121

Visualizing osteonecrosis of jaws through neutrophil elastase : [11C]NES novel PET tracer

Dannberg, Amanda, Martinez, Theodora January 2023 (has links)
Radiation and medical drugs are used to fight head and neck cancer, but in some cases these treatments cause other diseases and injuries. Osteoradionecrosis (ORN) and medication-related osteonecrosis of the jaw (MRONJ) are dreaded late complications of radiation therapy and medical drugs in the jaws and cause great suffering to those affected. The full extent of ORN and MRONJ can be difficult to diagnose because the boundary between osteonecrotic and healthy tissue is hard to visualize and quantify. Maxillofacial surgeons currently rely on radiology and clinical appearance to delineate affected bone, which may result in imprecise estimation of the affected area. As a possible adjuvant diagnostic procedure, visualizing osteonecrosis by examining neutrophil elastase (NE) activity in the jaws was tested in patients. A newly developed positron emission tomography (PET) tracer specific for NE was used for observation and measurement in PET/CT images, and image processing software was used for visualization, segmentation, and analysis. Areas with osteonecrosis were identified in the ORN patients, but not in their entirety, and not all activity could be equated with osteonecrosis, as undiagnosed areas also absorbed the tracer. Visualization of MRONJ displayed unexpectedly low activity in the diagnosed area. The conclusion drawn from the results and the analysis is that NE activity can be found in osteonecrosis patients, but the activity alone does not provide complete information to visualize and quantify the diseased area and cannot be equated with osteonecrosis. To verify NE activity as osteonecrosis, tissue samples from the affected area need to be collected for histological examination.
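The abstract does not describe how tracer activity was quantified inside the segmented regions; purely as an illustrative sketch (not the thesis's actual workflow), mean uptake within a segmented jaw region of a PET volume could be measured as follows, where the file names and the use of nibabel are assumptions:

```python
import nibabel as nib
import numpy as np

# Hypothetical file names; the thesis's actual data format and software are not specified.
pet = nib.load("pet_ne_tracer.nii.gz").get_fdata()      # [11C]NES PET volume
mask = nib.load("jaw_segmentation.nii.gz").get_fdata()   # binary mask of the segmented jaw region

roi = pet[mask > 0]                                      # tracer uptake inside the region
print(f"Mean uptake in ROI: {roi.mean():.2f}")
print(f"Max uptake in ROI:  {roi.max():.2f}")
print(f"ROI size (voxels):  {roi.size}")
```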
122

Radiotherapy treatment strategy for prostate cancer with lymph node involvement / Strålbehandlingsstrategi för prostatacancer med misstänkt involverade lymfkörtlar

Östensson, Amanda January 2023 (has links)
Radiotherapy is a common and useful method for treating prostate cancer, often using gold fiducial markers in the prostate as guidance. However, when there is a high risk of lymph node involvement, the independent motion of the target volumes complicates patient positioning, since a choice must be made between positioning against the gold fiducial markers or against the bone anatomy. This leads to expanded margins for either the prostate or the pelvic lymph nodes. In this thesis, two different treatment strategies were simulated and compared against the clinically given treatment plans. The purpose was to evaluate the standard treatment and to be able to recommend a new clinical approach for treatment of high-risk prostate cancer. Nine high-risk prostate cancer patients with their given treatment plans were used as a baseline. The patients underwent a planning CT and five CBCTs during treatment. Two new treatment plan setups were created: a robust treatment and a sequential treatment, with three and nine different plans respectively. The baseline and the robust treatment used gold fiducial markers as reference, with a prescribed dose of 2.20 Gy over 35 fractions delivered with VMAT. The sequential treatment used both gold fiducial markers and bone anatomy as reference, delivered over 35 fractions with a prescribed dose of 0.6 Gy using a single arc and 1.6 Gy using a dual arc, respectively. A total of thirteen different treatment plan setups for each patient were simulated 100 times each, resulting in 11,700 simulated treatments in total. The simulated treatments were evaluated by the percentage passing nine different clinical goals, as well as dose and percentage-volume averages for these goals. The simulated robust treatments showed a decrease in percentage passing and D98 for the prostate and an increase in percentage passing and D98 for the lymph nodes and vesicles compared to the baseline. An increase in percentage passing and D98 was seen in the sequential treatment strategy for both targets compared to the baseline. The rectum had a larger percentage passing the clinical goals and a lower V69, V74 and V59 for both the robust and sequential treatment strategies. The D2 for the external contour was lower in the robust treatment strategy but higher in the sequential treatment strategy, while the D2 to the femoral heads was lower for both compared to the baseline treatment strategy. In conclusion, an improved dose coverage was seen in the sequential strategy with good sparing of organs at risk. The robust treatment strategy showed promising results for sparing organs at risk, but with a less robust dose coverage of the prostate.
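The clinical goals mentioned (D98, D2, V59/V69/V74) are standard dose-volume metrics; as a hedged sketch rather than the thesis's evaluation code, they can be computed from per-voxel structure doses like this (the example dose distributions are made up):

```python
import numpy as np

def d_percent(dose_in_structure, percent):
    """Dose (Gy) that at least `percent` % of the structure volume receives (e.g. D98, D2)."""
    return np.percentile(dose_in_structure, 100 - percent)

def v_dose(dose_in_structure, threshold_gy):
    """Percentage of the structure volume receiving at least `threshold_gy` Gy (e.g. V69)."""
    return 100.0 * np.mean(dose_in_structure >= threshold_gy)

# Made-up per-voxel dose samples (Gy) for two structures in one simulated treatment.
rng = np.random.default_rng(0)
prostate_dose = rng.normal(77.0, 1.5, 50_000)
rectum_dose = rng.normal(40.0, 15.0, 30_000)

print(f"Prostate D98: {d_percent(prostate_dose, 98):.1f} Gy")   # coverage goal
print(f"Prostate D2:  {d_percent(prostate_dose, 2):.1f} Gy")    # near-maximum dose
print(f"Rectum V69:   {v_dose(rectum_dose, 69.0):.1f} %")       # sparing goal
```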
123

Vitiligo image classification using pre-trained Convolutional Neural Network Architectures, and its economic impact on health care / Vitiligo bildklassificering med hjälp av förtränade konvolutionella neurala nätverksarkitekturer och dess ekonomiska inverkan på sjukvården

Bashar, Nour, Alsaid Suliman, MRami January 2022 (has links)
Vitiligo is a skin disease in which the pigment cells that produce melanin die or stop functioning, causing white patches to appear on the body. Although vitiligo is not considered a serious disease, it can indicate that something is wrong with a person's immune system. In recent years, the use of medical image processing techniques has grown, and research continues to develop new techniques for analysing and processing medical images. Deep convolutional neural networks have proven effective in many medical image classification tasks, which suggests that they may also perform well in vitiligo classification. Our study uses four deep convolutional neural networks to classify images of vitiligo and normal skin. The architectures selected are VGG-19, ResNeXt101, InceptionResNetV2 and InceptionV3. ROC and AUC metrics are used to assess each model's performance. In addition, the authors investigate the economic benefits that this technology may provide to the healthcare system and patients. To train and evaluate the CNN models, the authors used a dataset containing 1341 images in total. Because the dataset is limited, 5-fold cross-validation is also employed to improve the models' predictions. The results demonstrate that InceptionV3 achieves the best performance in the classification of vitiligo, with an AUC value of 0.9111, while InceptionResNetV2 has the lowest AUC value of 0.8560.
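The abstract names the architectures, 5-fold cross-validation and AUC, but not the training code; the sketch below shows, under the assumption of a Keras/TensorFlow workflow with images already loaded into arrays, how a pre-trained backbone could be fine-tuned and scored in this way. It is illustrative only, not the authors' implementation:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def build_model():
    # Pre-trained backbone; only the new classification head is trained here.
    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                             input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def cross_validate(x, y, n_splits=5):
    """x: (n, 299, 299, 3) preprocessed images, y: (n,) labels (1 = vitiligo, 0 = normal)."""
    aucs = []
    for train_idx, val_idx in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(x, y):
        model = build_model()
        model.fit(x[train_idx], y[train_idx], epochs=5, batch_size=32, verbose=0)
        aucs.append(roc_auc_score(y[val_idx], model.predict(x[val_idx]).ravel()))
    return float(np.mean(aucs))
```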
124

Machine learning assisted decision support system for image analysis of OCT

Yacoub, Elias January 2022 (has links)
Optical Coherence Tomography (OCT) has been around for more than 30 years and is still being continuously improved. The department of ophthalmology at Sahlgrenska Hospital relies heavily on OCT to help treat patients with eye diseases. The department currently faces a problem where the time from an OCT scan to treatment is increasing due to an overload of patient visits every day. Since each OCT scan must be analyzed by a trained expert, the growing number of patients overwhelms the few experts the department has. It is believed that the next phase of this medical field will come through the adoption of machine learning technology. This thesis was issued by Sahlgrenska University Hospital (SUH), which wants to address ophthalmology's problem by introducing machine learning into its workflow. The thesis aims to determine the best-suited CNN through training and testing of pre-trained models and to build a tool into which a model can be integrated for use in ophthalmology. Transfer learning was used to compare three different pre-trained models offered by Keras, namely VGG16, InceptionResNetV2 and ResNet50V2. They were all trained on an open dataset containing 84,495 OCT images categorized into four classes: the three conditions choroidal neovascularization (CNV), diabetic macular edema (DME) and drusen, as well as normal eyes. To further improve the accuracy of the models, oversampling, undersampling, and data augmentation were applied to the training set and then tested in different combinations. A web application was built using Tensorflow.js and Node.js, into which the best-performing model was later integrated. Of the three, the VGG16 model with only oversampling applied performed best, yielding an average of 95% precision, 95% recall and a 95% F1-score. Second was the Inception model with only oversampling applied, with an average of 93% precision, 93% recall and a 93% F1-score. Last came the ResNet model with an average of 93% precision, 92% recall and a 92% F1-score. The results suggest that oversampling is the overall best technique for this dataset; the chosen data augmentation techniques only led to models performing marginally worse in all cases. The results also suggest that pre-trained models with more parameters, such as the VGG16 model, have more feature mappings and therefore achieve higher accuracy. On this basis, the number of parameters and the quality of feature mappings should be taken into account when using pre-trained models.
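The abstract reports that oversampling gave the best results and evaluates the models with precision, recall and F1-score; a minimal sketch of random oversampling and per-class reporting, assuming NumPy arrays of images and integer labels (not the thesis's code), is shown below:

```python
import numpy as np
from sklearn.metrics import classification_report

def oversample(x, y, seed=0):
    """Randomly duplicate minority-class samples until every class matches the largest one."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        class_idx = np.where(y == c)[0]
        idx.append(class_idx)
        if n < target:
            idx.append(rng.choice(class_idx, size=target - n, replace=True))
    idx = np.concatenate(idx)
    rng.shuffle(idx)
    return x[idx], y[idx]

def report(y_true, y_pred):
    """Per-class precision, recall and F1 on the held-out test set (class order assumed)."""
    print(classification_report(y_true, y_pred,
                                target_names=["CNV", "DME", "drusen", "normal"]))
```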
125

Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images

Karlsson, Simon, Welander, Per January 2018 (has links)
Generative Adversarial Networks (GANs) are a deep learning method developed for synthesizing data. One application is image-to-image translation, which could prove valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands training data with high diversity. The scenarios in the medical field are fewer, but the problem is instead that collecting training data is difficult, time-consuming and expensive. This thesis evaluates different GAN models by comparing synthetic MR images produced by the models against ground truth images. A perceptual study was also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It also shows that models producing more visually realistic synthetic images do not necessarily score better in quantitative error measurements against ground truth data. Along with the investigations on medical images, the thesis explores the possibility of generating synthetic street view images under different resolutions, light and weather conditions. Different GAN models were compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.
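The abstract compares synthetic MR images against ground truth using quantitative error measurements without naming them; as an assumption, common per-image choices such as MAE, PSNR and SSIM could be computed as in this sketch:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(synthetic, ground_truth):
    """MAE, PSNR and SSIM between a synthetic MR slice and its ground truth.

    Both inputs are 2D float arrays scaled to [0, 1]; the metric choice is an
    assumption, not necessarily the measurements used in the thesis.
    """
    mae = float(np.mean(np.abs(synthetic - ground_truth)))
    psnr = peak_signal_noise_ratio(ground_truth, synthetic, data_range=1.0)
    ssim = structural_similarity(ground_truth, synthetic, data_range=1.0)
    return mae, psnr, ssim

# Example with random arrays standing in for real image data:
rng = np.random.default_rng(0)
a, b = rng.random((256, 256)), rng.random((256, 256))
print(compare(a, b))
```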
126

Deep Neural Network for Classification of H&E-stained Colorectal Polyps : Exploring the Pipeline of Computer-Assisted Histopathology

Brunzell, Stina January 2024 (has links)
Colorectal cancer is one of the most prevalent malignancies globally, and the recently introduced digital pathology enables the use of machine learning as an aid for fast diagnostics. This project aimed to develop a deep neural network model to identify and differentiate dysplasia in the epithelium of colorectal polyps, posed as a binary classification problem. The available dataset consisted of 80 whole-slide images of different H&E-stained polyp sections, which were divided into smaller patches annotated by a pathologist. The best-performing model was a pre-trained ResNet-18 using a weighted sampler, weight decay and augmentation during fine-tuning. Reaching an area under the precision-recall curve of 0.9989 and 97.41% accuracy on previously unseen data, the model was judged to underperform relative to the task's intra-observer variability and to be in line with the inter-observer variability. The final model is publicly available at https://github.com/stinabr/classification-of-colorectal-polyps.
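The abstract specifies a pre-trained ResNet-18 with a weighted sampler, weight decay and augmentation; the following PyTorch sketch illustrates one way to set that up (hyperparameters, transforms and the `patch_labels` array are assumptions, not values from the thesis):

```python
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler
from torchvision import models, transforms

# Patch-level augmentation during fine-tuning (the specific transforms are assumptions).
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

# Pre-trained ResNet-18 with a binary head (dysplasia vs. no dysplasia).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

def make_sampler(labels):
    """Weighted sampler: each patch weighted inversely to its class frequency."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels)
    weights = 1.0 / counts[labels].float()
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# Weight decay applied through the optimizer; learning rate and decay are placeholders.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
criterion = nn.CrossEntropyLoss()
# loader = DataLoader(train_dataset, batch_size=64, sampler=make_sampler(patch_labels))
```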
127

Photoplethysmography for Non-Invasive Measurement of Cerebral Blood Flow: Calibration of a Wearable Custom-Made PPG Sensor / Fotopletysmografi för Icke-Invasiv Mätning av Cerebralt Blodflöde: Kalibrering av en Egentillverkad Bärbar PPG-Sensor

Spadolini, Vittorio January 2024 (has links)
Stroke is an enormous global burden; six and a half million people die from stroke annually [1]. Effective monitoring of blood hemodynamic parameters such as blood velocity and volume flow makes it possible to help and treat patients. This project aimed to calibrate a custom-made wearable system for measuring cerebral blood flow (CBF) using a photoplethysmography (PPG) sensor. The measurements were validated using Doppler ultrasound as a reference method. Five (N=5) subjects (age = 24±1.41 years) were selected for the project. The PPG and Doppler ultrasound probes were placed above the left and right common carotid arteries (CCA), respectively. Measurements were taken simultaneously for 12 seconds each, with six consecutive measurements per subject and two time-synchronized ECG recordings. Subsequently, an extraction algorithm was used to extract the velocity envelope (TAMEAN) from the Doppler image and obtain the blood volume flow (ml/min). After synchronization, the PPG signal output expressed in volts was calibrated to the corresponding volume, and a calibration curve was created. The extraction algorithm achieved remarkable results, with almost perfect correlation with the Doppler image reference (r_TAMEAN = 0.951 and r_volume = 0.975), demonstrating its reliability. Challenges encountered during postprocessing and synchronization highlighted the need for careful refinement of the project framework. Despite successful signal processing and alignment techniques, calibration results were suboptimal due to synchronization difficulties and motion artifacts. Limitations included impractical measurement locations and susceptibility to movement artifacts. The calibration process did not yield the expected outcomes and the project aim was not achieved: all the per-subject linear regression models failed to accurately predict the volume flow based on the measured voltages. Future work could focus on refining calibration procedures, improving synchronization methods, and expanding studies to include larger cohorts. Although the wearable device was tested, the project's goal was only partially achieved, underscoring the complexity of accurately measuring cerebral blood flow using PPG sensors.
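The calibration described here, fitting a per-subject line from PPG voltage to Doppler-derived volume flow and reporting correlation coefficients, can be sketched with SciPy as below; the numbers are hypothetical example data, not measurements from the study:

```python
import numpy as np
from scipy import stats

# ppg_volts: PPG output (V); flow_ml_min: Doppler-derived volume flow (ml/min),
# both synchronized to the same time base (hypothetical example data).
ppg_volts = np.array([0.42, 0.47, 0.51, 0.55, 0.60, 0.66])
flow_ml_min = np.array([310.0, 335.0, 362.0, 388.0, 415.0, 450.0])

# Per-subject calibration curve: least-squares line mapping voltage to volume flow.
slope, intercept, r_value, p_value, std_err = stats.linregress(ppg_volts, flow_ml_min)
print(f"flow ≈ {slope:.1f} * V + {intercept:.1f}  (r = {r_value:.3f}, p = {p_value:.3g})")

# Predicted flow for a new voltage reading:
print(f"Predicted flow at 0.58 V: {slope * 0.58 + intercept:.0f} ml/min")
```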
128

Ett sannolikhetsbaserat kvalitetsmått förbättrar klassificeringen av oförväntade sekvenser i in situ sekvensering / A probability-based quality measure improves the classification of unexpected sequences in in situ sequencing

Nordesjö, Olle, Pontén, Victor, Herman, Stephanie, Ås, Joel, Jamal, Sabri, Nyberg, Alona January 2014 (has links)
In situ sequencing is a method that can be used to localize differential expression of mRNA directly in tissue sections, which can give valuable insights into many states of disease. Today, many of the registered sequences from in situ sequencing are lost due to the conservative quality measure used to filter out incorrect sequencing reads. There is room for improvement in the performance of the current base calling method, since the technology is at an early stage of development. We performed exploratory data analysis to investigate the occurrence of systematic errors and corrected for these using various statistical methods. The primary methods investigated were the following: (I) correction of emission spectra overlap resulting in spillover between channels; (II) a probability-based interpretation of intensity data, resulting in a novel quality measure and an alternative classifier based on supervised learning; and (III) analysis of the occurrence of cycle-dependent effects, e.g. incomplete dehybridization of fluorescent probes. We suggest the following: implementation and evaluation of the probability-based quality measure; development and implementation of the proposed classifier; and additional experiments to investigate the possible occurrence of incomplete dehybridization.
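The spillover correction in (I) is not detailed in the abstract; one common approach is linear unmixing with a channel crosstalk matrix, sketched below. The matrix coefficients and the intensity-share quality score are illustrative assumptions, not the group's actual method:

```python
import numpy as np

# Hypothetical crosstalk matrix: entry [i, j] is the fraction of dye j's signal
# that bleeds into detection channel i (diagonal = direct signal).
crosstalk = np.array([
    [1.00, 0.12, 0.02, 0.00],
    [0.08, 1.00, 0.10, 0.01],
    [0.01, 0.09, 1.00, 0.15],
    [0.00, 0.02, 0.11, 1.00],
])

def unmix(measured):
    """Recover per-dye intensities from measured channel intensities (4 channels)."""
    corrected = np.linalg.solve(crosstalk, np.atleast_2d(measured).T).T
    return np.clip(corrected, 0.0, None)  # negative intensities are not physical

def base_call(measured, bases="ACGT"):
    """Call the brightest dye and report its share of total intensity as a simple quality."""
    intensities = unmix(measured).ravel()
    best = int(np.argmax(intensities))
    quality = intensities[best] / intensities.sum()
    return bases[best], quality

print(base_call([0.20, 0.95, 0.30, 0.05]))
```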
129

Improving Brain Tumor Segmentation using synthetic images from GANs

Nijhawan, Aashana January 2021 (has links)
Artificial intelligence (AI) has attracted considerable attention for several years, and even more so now in the field of diagnostic medical imaging. AI-based diagnoses have shown improvements in detecting the smallest abnormalities present in tumors and lesions, which can tremendously help public healthcare. Hospitals hold large amounts of biomedical imaging data, but only a small portion is available for research due to data and privacy protection. Manually segmenting tumors in magnetic resonance imaging (MRI) can be expensive and time-consuming, and the segmentation and classification require high precision, so they are usually performed by medical experts following clinical standards. When machine learning models are trained on such small amounts of data, they tend to overfit. With advancing deep learning techniques, it is possible to generate images using Generative Adversarial Networks (GANs), which have garnered a great deal of attention for their power to produce realistic-looking images, videos, and audio. This thesis aims to use synthetic images generated by progressively growing GANs (PGGAN) along with real images to perform segmentation on brain tumor MRI. The idea is to investigate whether the addition of this synthetic data improves the segmentation significantly or not. To analyze the quality of the images produced by the PGGAN, the Multi-Scale Structural Similarity Index Measure (MS-SSIM) and Sliced Wasserstein Distance (SWD) are recorded. To examine the segmentation performance, Dice Similarity Coefficient (DSC) and accuracy scores are observed. To inspect whether the improvement from synthetic images is significant, a parametric paired t-test and a non-parametric permutation test are used. The addition of synthetic images to real images was significant in most cases compared to using only real images. However, this addition of synthetic images makes the model uncertain. The models' robustness is tested using training-free uncertainty estimation of neural networks.
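The abstract uses a paired t-test and a non-parametric permutation test to judge whether adding synthetic images helps; a minimal sketch of a paired sign-flip permutation test on per-case Dice scores (with made-up numbers) is given below:

```python
import numpy as np

def paired_permutation_test(dsc_real, dsc_augmented, n_permutations=10_000, seed=0):
    """Two-sided paired permutation test on per-case Dice scores.

    dsc_real / dsc_augmented: Dice per test case for the model trained on real images
    only vs. real + synthetic images (hypothetical inputs). Under the null hypothesis
    the sign of each paired difference is exchangeable, so signs are flipped at random.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(dsc_augmented) - np.asarray(dsc_real)
    observed = diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_permutations, diffs.size))
    permuted = (signs * diffs).mean(axis=1)
    return float(np.mean(np.abs(permuted) >= abs(observed)))  # permutation p-value

# Example with made-up per-case Dice scores:
p = paired_permutation_test([0.80, 0.78, 0.83, 0.75], [0.84, 0.80, 0.85, 0.79])
print(f"p = {p:.4f}")
```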
130

TransRUnet: 2D Detection and Segmentation of Lymphoma Lesions in Full-Body PET-CT Images / TransRUnet: 2D-detektion och segmentering av lymfomlesioner i helkroppsundersökning med PET-CT

Stahnke, Lasse January 2023 (has links)
Identification and localization of FDG-avid lymphoma lesions in PET-CT image volumes is of high importance for the diagnosis and monitoring of treatment progress in lymphoma patients. This process is tedious, time-consuming, and error-prone due to the large image volumes and the heterogeneity of lesions, so a fully automatic method for lymphoma detection is desirable. The AutoPET challenge dataset contains 145 full-body FDG-PET-CT images of lymphoma patients with pixel-level segmentation of lesions. The Retina U-Net utilizes semantic segmentation maps for object detection through simultaneous segmentation and detection. More recently, transformer-based methods have become increasingly popular due to their good performance. Here, TransRUnet is proposed, a 2D deep neural network capable of segmentation and object detection, combining the Retina U-Net with a Feature Pyramid Transformer. First, a Retina U-Net was trained as a baseline on 2D axial slices of 116 patient volumes from the AutoPET dataset, achieving an mAP of 0.377 and a DSC of 0.737 on the 29 test patients. Second, the TransRUnet was trained on the same patients, achieving an mAP and DSC of 0.285 and 0.732, respectively. Performance comparison based on mAP and DSC did not show significant differences (p = 0.596 and p = 0.940 for mAP and DSC, respectively) between the Retina U-Net and the TransRUnet, and no substantial difference in FROC between the two models could be observed. To improve detection performance, the ground truth data should be preprocessed to reduce noise in the training data, or a 3D generalization of the TransRUnet should be used.
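Both networks in this entry operate on 2D axial slices and are compared via the Dice similarity coefficient; independent of the thesis's actual code, the slicing and DSC computation can be sketched as follows (the example masks are arbitrary):

```python
import numpy as np

def axial_slices(volume):
    """Yield 2D axial slices from a 3D volume stored as (z, y, x)."""
    for z in range(volume.shape[0]):
        yield volume[z]

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical (z, y, x) binary lesion masks for one patient.
pred_vol = np.zeros((4, 8, 8)); pred_vol[1:3, 2:6, 2:6] = 1
gt_vol = np.zeros((4, 8, 8)); gt_vol[1:3, 3:7, 2:6] = 1

print(f"DSC = {dice(pred_vol, gt_vol):.3f}")
n_lesion_slices = sum(int(s.any()) for s in axial_slices(gt_vol))
print(f"Axial slices containing lesion voxels: {n_lesion_slices}")
```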
