231

Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network

Chowdhury, Prodipto 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is one of the most important problems in computer vision. It has attracted considerable attention because of its applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one method of depth estimation, and in this thesis we investigate this technique in virtual environments, creating virtual datasets for the purpose. We apply graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur on a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance on both datasets in terms of NMAE and NRMSE; with regard to SSIM, its performance was 4% better on Middlebury than on Maya. We trained DfD-Net on the natural dataset, on the virtual dataset, and on the combination of both; the network trained on the virtual dataset performed best on both datasets. Comparing the two approaches, graph cuts is 7% better than DfD-Net in terms of SSIM on Middlebury images, while on Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two show similar performance on Maya images, and graph cuts is 1.8% better on Middlebury. The algorithms show no difference in performance in terms of NMAE. DfD-Net generates depth maps 500 times faster than graph cuts on Maya images and 200 times faster on Middlebury images.
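The comparison above relies on the normalized error metrics NMAE and NRMSE. A minimal sketch of how such metrics can be computed over depth maps follows; the exact normalization used in the thesis is not stated, so normalization by the ground-truth depth range (one common convention) is an assumption here:

```python
def nmae(pred, gt):
    """Normalized mean absolute error between two depth maps (flattened).

    Assumes normalization by the ground-truth depth range.
    """
    rng = max(gt) - min(gt)
    return sum(abs(p - g) for p, g in zip(pred, gt)) / (len(gt) * rng)

def nrmse(pred, gt):
    """Normalized root-mean-square error, same normalization."""
    rng = max(gt) - min(gt)
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    return mse ** 0.5 / rng

# Toy example: predicted vs. ground-truth depth values (arbitrary units)
gt = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(nmae(pred, gt), nrmse(pred, gt))  # lower is better for both
```

Because both metrics are normalized, they allow the Middlebury and Maya results to be compared on a common scale despite different depth ranges.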
232

Design Space Exploration of Convolutional Neural Networks for Image Classification

Shah, Prasham 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Computer vision pursues the goal of making machine perception as efficient as human vision. After decades of research toward that goal, algorithms now run efficiently on resource-constrained hardware such as mobile and embedded devices, making those devices capable of image classification, object detection, object recognition, semantic segmentation, and many other applications. Autonomous systems such as self-driving cars, drones, and UAVs are being developed successfully because of these advances in AI. Deep learning, a specific domain of machine learning within AI, focuses on developing algorithms for such applications: extracting features from raw image data, replacing pipelines of specialized models with single end-to-end models, and making models usable for multiple tasks with superior performance. A major focus is on techniques that detect and extract features providing better context for inference about an image or video stream. The multiple deep layers of CNN models allow a deep hierarchy of rich features to be learned and automatically extracted from images. CNNs are the backbone of computer vision; they receive so much attention among deep learning models because they were designed specifically for image data, and they are complicated but very effective at extracting features from an image or video stream. After AlexNet won the ILSVRC in 2012, research on CNNs increased drastically, and many state-of-the-art architectures were introduced, including VGG Net, GoogleNet, ResNet, Inception-v4, Inception-ResNet-v2, ShuffleNet, Xception, MobileNet, MobileNetV2, SqueezeNet, and SqueezeNext.
The trend in this research has been to add more layers to CNNs to make them more accurate, but model size increased along with depth. This problem was addressed by new algorithms that reduce model size, so today we have compact, low-latency CNN models that run on mobile devices and thereby reduce the computational cost of the embedded system. Following this idea, this thesis proposes two new CNN architectures, A-MnasNet and R-MnasNet, derived from MnasNet by design space exploration. These architectures outperform MnasNet in terms of model size and accuracy. They have been trained and tested on the CIFAR-10 dataset and, furthermore, implemented on NXP Bluebox 2.0, an autonomous driving platform, for image classification.
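The size savings the abstract describes come largely from block choices such as depthwise-separable convolutions, which the MnasNet family uses heavily. A back-of-the-envelope sketch of why such blocks shrink a model (the layer shape below is illustrative, not the actual A-MnasNet or R-MnasNet configuration):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by a
    1x1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 3x3 convolution from 128 to 256 channels
standard = conv_params(128, 256, 3)
separable = depthwise_separable_params(128, 256, 3)
print(standard, separable, standard / separable)  # roughly an 8-9x reduction
```

Design space exploration then searches over such block choices (kernel sizes, channel widths, layer counts) to trade accuracy against model size.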
234

A Novel Deep Learning Approach for Emotion Classification

Ayyalasomayajula, Satya Chandrashekhar 14 February 2022 (has links)
Neural networks are at the core of computer vision solutions for many applications. With the advent of deep neural networks, facial expression recognition (FER) has become an unavoidable yet challenging task in computer vision. Micro-expressions (MEs) are used prominently in security, psychotherapy, and neuroscience, and play a wide role in several related disciplines; however, because they involve subtle movements of facial muscles, micro-expressions are difficult to detect and identify. For these reasons, emotion detection and classification have always been hot research topics. The networks recently adopted to train FER systems have yet to address overfitting caused by insufficient training data and by expression-unrelated variations such as gender bias and face occlusions. Associating FER with speech emotion recognition (SER) triggered the development of multimodal neural networks for emotion classification, in which sensors played a significant role: they substantially increased accuracy by providing high-quality inputs, further raising the efficiency of the system. This thesis explores the principles behind deep neural networks, with a strong focus on convolutional neural networks (CNNs) and generative adversarial networks (GANs) as applied to emotion recognition. A motion magnification algorithm for ME detection and classification was implemented for applications requiring near real-time computation, and a new and improved multimodal network architecture was implemented. In addition to the motion magnification technique for emotion classification and extraction, the multimodal algorithm takes audio-visual cues as inputs and reads the MEs on the real face of the participant.
This feature of the architecture can be deployed when administering interviews, when supervising ICU patients in hospitals, in the auto industry, and in many other settings. The real-time emotion classifier, based on a state-of-the-art image-avatar animation model, was tested on simulated subjects. The salient features of the real face are mapped onto avatars built with a 3D scene-generation platform. In pursuit of the goal of emotion classification, the image animation model outperforms all baselines and prior works, and extensive tests and results demonstrate the validity of the approach.
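Motion magnification, the core preprocessing step named above, amplifies small temporal variations in pixel intensity so subtle facial motion becomes visible. A toy sketch of the idea on a single pixel's time series; the thesis does not specify its temporal filter, so a simple moving-average baseline stands in here for the band-pass filter used in classic Eulerian motion magnification:

```python
def magnify(signal, alpha, window=3):
    """Amplify deviations of a 1-D intensity signal from its local mean.

    Each sample's difference from a centered moving average (the 'subtle
    motion') is scaled by alpha and added back, exaggerating small changes.
    """
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        out.append(baseline + alpha * (signal[i] - baseline))
    return out

# A pixel whose brightness wiggles by +/-1 around 100: barely visible motion
pixel = [100, 101, 100, 99, 100, 101, 100]
print(magnify(pixel, alpha=5.0))  # the wiggle is now several times larger
```

Applied per pixel over a face video, the same amplification makes micro-expressions large enough for a downstream classifier to pick up.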
235

Improved Deep Learning Approaches for Medical Image Analysis

Chen, Ming January 2021 (has links)
No description available.
236

Deep morphological quantification and clustering of brain cancer cells using phase-contrast imaging

Engberg, Jonas January 2021 (has links)
Glioblastoma multiforme (GBM) is a very aggressive brain tumour. Previous studies have suggested that the morphological distribution of single GBM cells may hold information about disease severity. This study aims to determine whether automated morphological quantification and clustering of GBM cells is feasible and what it reveals. In this context, phase-contrast images from 10 different GBM cell cultures were analyzed. To test the hypothesis that morphological differences exist between the cell cultures, images of single GBM cells were cropped from whole-well images using CellProfiler and Python. The single-cell images were passed through several different feature extraction models to identify the model showing the most promise for this dataset. The features were then clustered and quantified to see whether any differentiation exists between the cell cultures. The results suggest that morphological feature differences exist between GBM cell cultures when using automated models, and the Siamese network managed to construct clusters of cells with very similar morphology. I conclude that the 10 cell cultures appear to contain cells with morphological differences. This highlights the importance of future studies to determine what these morphological differences imply for patients' survival and choice of treatment.
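The pipeline described (crop single cells, extract morphological features, cluster them) can be sketched roughly as follows. The two hand-crafted features and the tiny k-means here are illustrative stand-ins for the CellProfiler and deep feature-extraction models actually used in the study:

```python
def cell_features(mask):
    """Crude morphology features from a binary cell mask (list of 0/1 rows):
    area (pixel count) and an elongation ratio of the bounding box."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    area = sum(sum(row) for row in mask)
    h = rows[-1] - rows[0] + 1
    w = max(cols) - min(cols) + 1
    return [area, max(h, w) / min(h, w)]

def kmeans(points, k, iters=10):
    """Tiny k-means: the first k points seed the centroids."""
    centroids = [p[:] for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:
                centroids[j] = [sum(v) / len(g) for v in zip(*g)]
    return centroids

# Two toy masks: a round-ish cell and an elongated one
round_cell = [[0, 1, 1, 0], [1, 1, 1, 1], [0, 1, 1, 0]]
long_cell = [[1, 1, 1, 1, 1, 1]]
feats = [cell_features(round_cell), cell_features(long_cell)]
print(feats, kmeans(feats, k=2))
```

In the study itself, deep features (e.g. from the Siamese network) replace these hand-crafted ones, but the cluster-then-inspect logic is the same.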
237

Deep Convolutional Neural Network's Applicability and Interpretability for Agricultural Machine Vision Systems

Harshana, Habaragamuwa 26 November 2018 (has links)
Kyoto University / 0048 / New-system course-based doctorate / Doctor of Agriculture / Kō No. 21429 / Nōhaku No. 2307 / Doctoral thesis || H30 || N5157 (Faculty of Agriculture Library) / Kyoto University Graduate School of Agriculture, Division of Environmental Science and Technology / (Chief examiner) Professor Naoshi Kondo, Associate Professor Yuichi Ogawa, Professor Michihisa Iida / Fulfills Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DGAM
238

Deep Learning Approach for Vision Navigation in Flight

McNally, Branden Timothy January 2018 (has links)
No description available.
239

Using convolutional neural network to generate neuro image template

Qian, Songyue January 2018 (has links)
No description available.
240

Detecting Image Forgery with Color Phenomenology

Stanton, Jamie Alyssa 30 May 2019 (has links)
No description available.
