91

Correcting for Patient Breathing Motion in PET Imaging

O'Briain, Teaghan 26 August 2022 (has links)
Positron emission tomography (PET) requires imaging times of several minutes. Therefore, when imaging areas that are prone to respiratory motion, blurring effects are often observed. This blurring can impair our ability to use these images for diagnostic purposes as well as for treatment planning. While there are methods to account for this effect, they often rely on adjustments to the imaging protocols in the form of longer scan times or higher patient radiation doses. This dissertation explores an alternative approach that leverages state-of-the-art deep learning techniques to align the PET signal acquired at different points of the breathing motion. This method does not require adjustments to standard clinical protocols and is therefore more efficient and/or safer than the most widely adopted approaches. To help validate this method, Monte Carlo (MC) simulations were conducted to emulate the PET imaging process; these represent the focus of our first experiment. The next experiment was the development and testing of our motion correction method. A clinical four-ring PET imaging system was modeled using GATE (v. 9.0). To validate the simulations, PET images were acquired of a cylindrical phantom, a point source, and an image quality phantom with the modeled system, and the experimental procedures were also simulated. The simulations were compared against the measurements in terms of their count rates and sensitivity as well as their image uniformity, resolution, recovery coefficients, coefficients of variation, contrast, and background variability. When compared to the measured data, the number of true detections in the MC simulations was within 5%. The scatter fraction was found to be (31.1 ± 1.1)% and (29.8 ± 0.8)% in the measured and simulated scans, respectively. Analyzing the measured and simulated sinograms, the sensitivities were found to be 10.0 cps/kBq and 9.5 cps/kBq, respectively. The fraction of random coincidences was 19% in the measured data and 25% in the simulation. When calculating the image uniformity within the axial slices, the measured image exhibited a uniformity of (0.015 ± 0.005), while the simulated image had a uniformity of (0.029 ± 0.011). In the axial direction, the uniformity was measured to be (0.024 ± 0.006) and (0.040 ± 0.015) for the measured and simulated data, respectively. Comparing the image resolution, an average percentage difference of 2.9% was found between the measurements and simulations. The recovery coefficients calculated in both the measured and simulated images were found to be within the EARL ranges, except for that of the simulation of the smallest sphere. The coefficients of variation for the measured and simulated images were found to be 12% and 13%, respectively. Lastly, the background variability was consistent between the measurements and simulations, while the average percentage difference in the sphere contrasts was found to be 8.8%. The code used to run the GATE simulations and evaluate the described metrics has been made available (https://github.com/teaghan/PET_MonteCarlo). Next, to correct for breathing motion in PET imaging, an interpretable and unsupervised deep learning technique, FlowNet-PET, was constructed. The network was trained to predict the optical flow between two PET frames from different breathing amplitude ranges.
As a result, the trained model groups different retrospectively gated PET images together into a single motion-corrected bin, providing a final image with counting statistics similar to those of a non-gated image, but without the blurring effects that were initially observed. As a proof of concept, FlowNet-PET was applied to anthropomorphic digital phantom data, which made it possible to design robust metrics to quantify the corrections. When comparing the predicted optical flows to the ground truths, the median absolute error was found to be smaller than the pixel and slice widths, even for the phantom with a diaphragm movement of 21 mm. The improvements were illustrated by comparing against images without motion and computing the intersection over union (IoU) of the tumors as well as the enclosed activity and coefficient of variation (CoV) within the no-motion tumor volume before and after the corrections were applied. The average relative improvements provided by the network were 54%, 90%, and 76% for the IoU, total activity, and CoV, respectively. The results were then compared against the conventional retrospective phase binning approach. FlowNet-PET achieved similar results as retrospective binning, but required only one sixth of the scan duration. The code and data used for training and analysis have been made publicly available (https://github.com/teaghan/FlowNet_PET). The encouraging results provided by our motion correction method present the opportunity for many possible future applications. For instance, this method could be transferred to clinical patient PET images or applied to alternative imaging modalities that would benefit from similar motion corrections. When applied to clinical PET images, FlowNet-PET would provide the capability of acquiring high-quality images without requiring either longer scan times or higher patient radiation doses. Accordingly, the imaging process would likely become more efficient and/or safer, which would be appreciated by both health care institutions and their patients. / Graduate
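The IoU and CoV figures quoted above can be computed directly from voxel data. A minimal sketch of both metrics follows — an illustration only, not the code from the linked repository; the mask and activity arrays are hypothetical inputs:

```python
import numpy as np

def tumor_iou(mask_a, mask_b):
    """Intersection over union of two boolean tumor masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def coefficient_of_variation(activity, mask):
    """CoV of voxel activity within the no-motion tumor volume."""
    voxels = activity[mask]
    return voxels.std() / voxels.mean()
```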
92

Using Machine Learning Techniques to Understand the Biophysics of Demyelination

Rezk, Ahmed Hany Mohamed Hassan 15 August 2022 (has links)
Demyelination is the process whereby the insulating layer of axons known as myelin is damaged. This affects the propagation of action potentials along axons, which can have deteriorating consequences for the motor activity of an organism. Thus, it is important to understand the biophysical effects of demyelination to improve the diagnosis of demyelinating diseases. We trained a Convolutional Neural Network (CNN) on Coherent anti-Stokes Raman scattering (CARS) microscope images of mouse spinal cords afflicted with the demyelinating disease Experimental Autoimmune Encephalomyelitis (EAE). Our CNN was able to classify the images reliably based on clinical scores assigned to the mice. We then synthesized our own images of the spinal cord regions using a 2D biased random walk. These images are simplified versions of the original CARS images and show homogeneously myelinated axons, unlike the heterogeneous nerve fibres found in real spinal cords. The images were fed into the trained CNN in an attempt to develop a clinical connection to the biophysical effects of demyelination. We found that the trained CNN was indeed able to capture structural features related to demyelination, which can allow us to constrain demyelination models so that they include the simulated parameters of the synthesized images.
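The abstract does not give implementation details for the image synthesis; for illustration, one common formulation of a 2D biased random walk can be traced as below. The preferred direction and bias value are assumptions, standing in for an axon's preferred orientation:

```python
import numpy as np

def biased_random_walk_2d(n_steps, bias=0.6, rng=None):
    """Trace a 2D walk whose steps favour the +y direction
    (illustrative bias; not the thesis's exact parameters)."""
    rng = np.random.default_rng() if rng is None else rng
    # Candidate unit steps: up, down, left, right.
    steps = np.array([[0, 1], [0, -1], [-1, 0], [1, 0]])
    # Put extra probability mass on the preferred (+y) step.
    p = np.array([bias, (1 - bias) / 3, (1 - bias) / 3, (1 - bias) / 3])
    choices = rng.choice(len(steps), size=n_steps, p=p)
    return np.cumsum(steps[choices], axis=0)  # (n_steps, 2) path
```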
93

Predicting Lung Cancer using Deep Learning to Analyze Computed Tomography Images

Abunajm, Saleh 22 August 2022 (has links)
No description available.
94

Comparison Of Object Detection Models - to detect recycle logos on tetra packs

Kamireddi, Sree Chandan January 2022 (has links)
Background: Manufacturing and production of daily used products from recyclable materials has risen steeply over the past few years. The recyclable packages considered in this thesis are tetra packs, which are widely used for packaging liquid foods. Several recycling methods are in use, and the barcode on the back of each pack is scanned to determine which recycling method the pack should go through. In some cases, the barcode wears off through use, which creates a problem. Research is therefore needed to address this problem and find a solution, such as detecting the recycle logos on the packs directly.

Objectives: The objectives that address and fulfill the aim of this thesis are: to find or create the necessary dataset containing clear pictures of tetra packs with visible recycle logos; to draw bounding boxes around the objects, i.e., logos, for training the models; to test the dataset by applying all four deep learning models; and to compare the models on speed and the performance metrics, i.e., mAP and IoU, to identify the best algorithm among them.

Methods: To answer the research question, we chose one research methodology: experimentation.

Results: The speeds of YOLOv5, SSD, and Faster-RCNN were found to be similar, at 0.2 seconds, whereas Mask-RCNN was the slowest, with a detection speed of 1.0 seconds. The mAP score of SSD is 0.86, the highest among the four, followed by YOLOv5 at 0.771, Faster-RCNN at 0.67, and Mask-RCNN at 0.62. The IoU score of Faster-RCNN is 0.96, the highest among the four, followed by YOLOv5 at 0.95, SSD at 0.50, and Mask-RCNN at 0.321. Comparing all of the above results, YOLOv5 is concluded to be the best algorithm among the four, as it is relatively fast and accurate without any major drawbacks in any category.

Conclusions: Among the four algorithms Faster-RCNN, YOLO, SSD, and Mask-RCNN, YOLOv5 is declared the best after comparing all the models on speed and the performance metrics mAP and IoU.
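For context, a pretrained YOLOv5 model can be loaded and timed in a few lines via PyTorch Hub. This is a hedged sketch — the thesis trained its own models on a custom logo dataset, and the image path here is hypothetical:

```python
import time
import torch

# Assumes the public ultralytics/yolov5 PyTorch Hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # small variant
img = 'tetra_pack.jpg'  # hypothetical test image

start = time.time()
results = model(img)  # forward pass with built-in pre/post-processing
print(f'detection took {time.time() - start:.2f} s')
results.print()  # class, confidence, and box summary
```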
95

Mobile Object Detection using TensorFlow Lite and Transfer Learning / Objektigenkänning i mobila enheter med TensorFlow Lite

Alsing, Oscar January 2018 (has links)
With the advancement of deep learning in the past few years, we are able to create complex machine learning models for detecting objects in images, regardless of the characteristics of the objects to be detected. This development has enabled engineers to replace existing heuristics-based systems with machine learning models of superior performance. In this report, we evaluate the viability of using deep learning models for object detection in real-time video feeds on mobile devices, in terms of object detection performance and inference delay, as either an end-to-end system or a feature extractor for existing algorithms. Our results show a significant increase in object detection performance in comparison to existing algorithms when transfer learning is used on neural networks adapted for mobile use.
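The report predates current TensorFlow releases, but the core deployment step it describes — converting a trained model for on-device inference with TensorFlow Lite — looks roughly like this in the modern API. A hedged sketch: the stand-in MobileNetV2 model is an assumption, not the report's detector:

```python
import tensorflow as tf

# Stand-in for a fine-tuned mobile detector (assumption for illustration).
model = tf.keras.applications.MobileNetV2(weights='imagenet')

# Convert the Keras model to the TFLite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)  # ready to bundle into a mobile app
```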
96

Compact ConvNets with Ternary Weights and Binary Activations

Holesovsky, Ondrej January 2017 (has links)
Compact architectures and ternary weights with binary activations are two methods suitable for making neural networks more efficient. We introduce a) a dithering binary activation, which improves the accuracy of ternary weight networks with binary activations by randomizing quantization error, and b) a method of implementing ternary weight networks with binary activations using binary operations. Despite these new approaches, training a compact SqueezeNet architecture with ternary weights and full-precision activations on ImageNet degrades classification accuracy significantly more than training a less compact architecture the same way. Therefore, ternary weights in their current form cannot be called the best method for reducing network size. However, the effect of weight decay on ternary weight network training should be investigated further in order to have more certainty in this finding.
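Both ideas can be sketched in a few lines. The threshold and noise range below are illustrative assumptions, not the thesis's exact quantization scheme:

```python
import torch

def ternarize(w, threshold=0.05):
    """Map weights to {-1, 0, +1}: zero out small weights,
    keep the signs of the rest (threshold is illustrative)."""
    return torch.sign(w) * (w.abs() > threshold).float()

def dithered_binary_activation(x):
    """Binarize to {-1, +1} after adding uniform noise, so the
    quantization error is randomized rather than systematic."""
    noise = torch.empty_like(x).uniform_(-0.5, 0.5)
    return torch.sign(x + noise)
```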
97

Calibration in Eye Tracking Using Transfer Learning / Kalibrering inom Eye Tracking genom överföringsträning

Masko, David January 2017 (has links)
This thesis empirically studies transfer learning as a calibration framework for Convolutional Neural Network (CNN)-based appearance-based gaze estimation models. A dataset of approximately 1,900,000 eyestripe images distributed over 1682 subjects is used to train and evaluate several gaze estimation models. Each model is initially trained on the training data, resulting in generic gaze models. The models are subsequently calibrated for each test subject, using the subject's calibration data, by applying transfer learning through network fine-tuning on the final layers of the network. Transfer learning is observed to reduce the Euclidean distance error of the generic models by 12-21%, which is in line with the current state of the art. The best-performing calibrated model shows a mean error of 29.53 mm and a median error of 22.77 mm. However, calibrating heatmap output-based gaze estimation models decreases performance relative to the generic models. It is concluded that transfer learning is a viable calibration framework for improving the performance of CNN-based appearance-based gaze estimation models.
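The calibration procedure — freezing the generic network and fine-tuning only its final layers on a subject's calibration data — can be sketched as follows. This is a hedged illustration; the layer names, loss, and hyperparameters are assumptions:

```python
import torch
from torch import nn

def calibrate(model, final_layers, calib_loader, lr=1e-4, epochs=5):
    """Per-subject calibration: freeze everything, then fine-tune
    only the named final layers on the subject's calibration data."""
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for name in final_layers:  # e.g. ['fc2', 'fc3'] (hypothetical names)
        layer = dict(model.named_modules())[name]
        for p in layer.parameters():
            p.requires_grad = True
            params.append(p)
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()  # regression loss on the 2D gaze point
    for _ in range(epochs):
        for images, gaze in calib_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), gaze)
            loss.backward()
            opt.step()
    return model
```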
98

Gland Segmentation with Convolutional Neural Networks : Validity of Stroma Segmentation as a General Approach / Konvolutionella neurala nätverk för segmentering av körtel : Validitet hos stroma-segmentering som en allmän metod

Binder, Thomas January 2019 (has links)
The analysis of glandular morphology within histopathology images is a crucial step in determining the stage of cancer. Manual annotation is a very laborious task: it is time-consuming and suffers from the subjectivity of the specialists who label the glands. One of the aims of computational pathology is to develop tools to automate gland segmentation. Such an algorithm would improve the efficiency of cancer diagnosis. This is a complex task, as there is large variability in glandular morphologies and staining techniques. So far, specialised models focusing on only one organ have given promising results. This work investigated the idea of a cross-domain approximation. Unlike parenchymae, the stroma tissue that lies between the glands is similar throughout all organs in the body. Creating a model able to precisely segment the stroma would pave the way for a cross-organ model, which would be able to segment the tissue and therefore give access to gland morphologies of different organs. To address this issue, we investigated different new and former architectures, such as MILD-Net, currently the best-performing algorithm in the GlaS challenge. New architectures were created based on the promising U-shaped network as well as Xception and ResNet for feature extraction. These networks were trained on colon histopathology images, focusing on glands and on the stroma. The comparison of the different results showed that this initial cross-domain approximation goes in the right direction and encourages further development.
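For readers unfamiliar with the U-shaped design, a minimal two-level version is sketched below — an illustration only; the thesis's networks, built on MILD-Net and Xception/ResNet encoders, are substantially larger:

```python
import torch
from torch import nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-shaped segmentation network (illustrative)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)  # 64 = 32 skip + 32 upsampled
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                  # skip-connection features
        bottom = self.enc2(self.pool(s1))  # downsampled path
        up = self.up(bottom)               # back to input resolution
        out = self.dec1(torch.cat([up, s1], dim=1))
        return self.head(out)              # per-pixel class logits
```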
99

Exploration and Comparison of Image-Based Techniques for Strawberry Detection

Liu, Yongxin 01 September 2020 (has links) (PDF)
Strawberry is an important cash crop in California, and its supply accounts for 80% of the US market [2]. However, in current practice, strawberries are picked manually, which is very labor-intensive and time-consuming. In addition, farmers need to hire an appropriate number of laborers to harvest the berries based on the estimated volume. Overestimating the yield wastes labor, while underestimating it leads to lost strawberry harvest [3]. Therefore, accurately estimating harvest volume in the field is important to farmers. This thesis focuses on an image-based solution to detect strawberries in the field using both traditional computer vision techniques and deep learning methods. Because strawberries at different growth stages differ considerably in color, various color spaces are first studied in this work, and the most effective color components are used to detect strawberries and differentiate mature from immature berries. In some color channels, such as the R channel of the RGB model, the Hue channel of the HSV model, and the 'a' channel of the Lab model, pixels belonging to ripe strawberries are clearly distinguished from background pixels. Thus, a color-based K-means clustering algorithm is used to detect red strawberries, achieving a 90.5% true-positive rate. For detecting unripe strawberries, a Support Vector Machine classifier was trained on HOG features; after optimizing the classifier through hard negative mining, the true-positive rate reached 81.11%. Finally, exploring deep learning, two detectors based on different pre-trained models were trained using the TensorFlow Object Detection API, accelerated on an Amazon Web Services GPU instance. On images of a single strawberry plant, they achieved true-positive rates of 89.2% and 92.3%, respectively; on field images with multiple plants, they reached 85.5% and 86.3%.
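The ripe-berry detection step can be illustrated with OpenCV: cluster the Lab 'a' channel (the red-green axis) with K-means and keep the reddest cluster. This is a hedged sketch — the image path, cluster count, and reddest-cluster heuristic are illustrative assumptions, not the thesis's exact pipeline:

```python
import cv2
import numpy as np

img = cv2.imread('strawberry_field.jpg')  # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
a_channel = lab[:, :, 1].reshape(-1, 1).astype(np.float32)

# K-means over pixel 'a' values (3 clusters chosen for illustration).
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(a_channel, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# The cluster with the highest 'a' value is the most red.
red_cluster = int(np.argmax(centers))
mask = (labels.reshape(lab.shape[:2]) == red_cluster).astype(np.uint8) * 255
cv2.imwrite('ripe_mask.png', mask)
```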
100

Attacking Computer Vision Models Using Occlusion Analysis to Create Physically Robust Adversarial Images

Loh, Jacobsen 01 June 2020 (has links) (PDF)
Self-driving cars rely on their sense of sight to function effectively in chaotic and uncontrolled environments. Thanks to recent developments in computer vision, specifically convolutional neural networks, autonomous vehicles have developed the ability to see at or above human-level capabilities, which in turn has allowed for rapid advances in self-driving cars. Unfortunately, much like humans being confused by simple optical illusions, convolutional neural networks are susceptible to simple adversarial inputs. As there is no overlap between the optical illusions that fool humans and the adversarial examples that threaten convolutional neural networks, little is understood as to why these adversarial examples dupe such advanced models and what effective mitigation techniques might exist to resolve these issues. This thesis focuses on these adversarial images. By extending existing work, this thesis is able to offer a unique perspective on adversarial examples. Furthermore, these extensions are used to develop a novel attack that can generate physically robust adversarial examples. These physically robust instances provide a unique challenge as they transcend both individual models and the digital domain, thereby posing a significant threat to the efficacy of convolutional neural networks and their dependent applications.
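Occlusion analysis itself is simple to express: slide a neutral patch across the image and record how much the model's confidence in its original prediction drops; high-drop regions are the most promising attack targets. A minimal sketch, under the assumption that `model` maps an HxWxC float image to class probabilities (hypothetical interface, not the thesis's code):

```python
import numpy as np

def occlusion_map(model, image, patch=16, stride=8, fill=0.5):
    """Heatmap of confidence drop when a gray patch occludes
    each region of the image."""
    base = model(image)
    cls = int(np.argmax(base))  # the model's original prediction
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1,
                     (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # gray square
            heat[i, j] = base[cls] - model(occluded)[cls]
    return heat
```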
