41

Deep GCNs with Random Partition and Generalized Aggregator

Xiong, Chenxin 25 November 2020 (has links)
Graph Convolutional Networks (GCNs) have drawn significant attention due to their power of representation learning on graphs. Recent works have developed frameworks for training deep GCNs and show impressive results in tasks such as point cloud classification and segmentation, and protein interaction prediction. For large-scale graphs, however, full-batch training of GCNs remains challenging, especially as GCNs go deeper. By thoroughly analyzing ClusterGCN, a clustering-based mini-batch training algorithm, we propose random partition, a more efficient and effective method for mini-batch training. In addition, selecting different permutation-invariant functions (such as max, mean, or add) for aggregating neighbors' information can lead to very different results. We therefore propose to alleviate this sensitivity by introducing a novel generalized aggregation function. In this thesis, I analyze the drawbacks of ClusterGCN and discuss its limitations. I then compare the performance of ClusterGCN with random partition, and the experimental results show that simple random partition clearly outperforms ClusterGCN on node property prediction tasks. Among the techniques commonly used to make GCNs deeper, I demonstrate that pre-activation is a better way of applying residual connections when stacking more layers. Finally, I present the complete work of training deeper GCNs with generalized aggregators and report promising results on several datasets from the Open Graph Benchmark (OGB).
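The random-partition mini-batching described above can be sketched in a few lines of NumPy (an illustrative sketch of the idea, not the thesis implementation — the function names and toy graph are invented for this example):

```python
import numpy as np

def random_partition_batches(num_nodes, num_parts, seed=0):
    """Randomly shuffle node indices and split them into parts.

    Each part is treated as one mini-batch subgraph, replacing
    Cluster-GCN's clustering step with a cheap random split.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    return np.array_split(perm, num_parts)

def induced_subgraph(adj, part):
    """Adjacency submatrix induced by the nodes of one partition."""
    return adj[np.ix_(part, part)]

# Toy graph: 6 nodes, identity adjacency, 3 mini-batches of 2 nodes.
adj = np.eye(6)
parts = random_partition_batches(6, 3)
sub = induced_subgraph(adj, parts[0])
```

Each mini-batch subgraph is then fed to the GCN independently, so memory scales with the partition size rather than the full graph.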
42

The prediction of condensation flow patterns by using artificial intelligence (AI) techniques

Seal, Michael Kevin January 2021 (has links)
Multiphase flow provides a solution to the high heat flux and precision required by modern-day gadgets and heat transfer devices as phase change processes make high heat transfer rates achievable at moderate temperature differences. An application of multiphase flow commonly used in industry is the condensation of refrigerants in inclined tubes. The identification of two-phase flow patterns, or flow regimes, is fundamental to the successful design and subsequent optimisation given that the heat transfer efficiency and pressure gradient are dependent on the flow structure of the working fluid. This study showed that with visualisation data and artificial neural networks (ANN), a machine could learn, and subsequently classify the separate flow patterns of condensation of R-134a refrigerant in inclined smooth tubes with more than 98% accuracy. The study considered 10 classes of flow pattern images acquired from previous experimental works that cover a wide range of flow conditions and the full range of tube inclination angles. Two types of classifiers were considered, namely multilayer perceptron (MLP) and convolutional neural networks (CNN). Although not the focus of this study, the use of a principal component analysis (PCA) allowed feature dimensionality reduction, dataset visualisation, and decreased associated computational cost when used together with multilayer perceptron neural networks. The superior two-dimensional spatial learning capability of convolutional neural networks allowed improved image classification and generalisation performance across all 10 flow pattern classes. In both cases, the classification was done sufficiently fast to enable real-time implementation in two-phase flow systems. 
The analysis sequence led to the development of a predictive tool for the classification of multiphase flow patterns in inclined tubes, with the goal that the features learnt through visualisation would apply to a broad range of flow conditions, fluids, tube geometries and orientations, and would even generalise well to identify adiabatic and boiling two-phase flow patterns. The method was validated by the prediction of flow pattern images found in the existing literature. / Dissertation (MEng)--University of Pretoria, 2021. / NRF / Mechanical and Aeronautical Engineering / MEng / Restricted
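The PCA-based feature reduction step used with the MLP classifiers above can be illustrated with a short NumPy sketch (the sizes and names here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components.

    Flattened image features are centred, the principal axes are
    obtained from the SVD of the centred data, and each sample is
    projected onto the leading directions of largest variance.
    """
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))   # 100 samples, 64 raw features
Z = pca_reduce(X, 8)             # reduced to 8 features per sample
```

The reduced vectors `Z` would then feed the MLP, cutting the input dimensionality and the associated training cost.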
43

Rozpoznávání druhu jídla s pomocí hlubokých neuronových sítí / Food classification using deep neural networks

Kuvik, Michal January 2019 (has links)
The aim of this thesis is to study deep convolutional neural networks and the associated task of image classification, and to experiment with the architecture of a particular network in order to achieve the most accurate results on the selected dataset. The thesis is divided into two parts: the first theoretically outlines the properties and structure of neural networks and briefly introduces selected networks; the second deals with experiments on this network, such as the impact of data augmentation, batch size, and dropout layers on the accuracy of the network. All results are then compared and discussed, with the best result achieving an accuracy of 86.44% on the test data.
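One of the factors experimented with above, the dropout layer, can be sketched as follows (a generic NumPy illustration of inverted dropout, independent of whatever framework the thesis actually used):

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    """Inverted dropout.

    During training, each activation is zeroed with probability
    `rate` and the survivors are scaled by 1/(1 - rate) so the
    expected activation is unchanged; at inference the input
    passes through untouched.
    """
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

Varying `rate` (and where the layer sits in the network) is exactly the kind of knob such accuracy experiments turn.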
44

Segmentace nádorů mozku v MRI datech s využitím hloubkového učení / Segmentation of brain tumours in MRI images using deep learning

Ustsinau, Usevalad January 2020 (has links)
This master's thesis opens with a short description of CT scans and MR images and the main differences between them, explains the structure of convolutional neural networks and how they are applied in biomedical image analysis, and then takes a popular modification of U-Net and tests it with two loss functions. Since segmentation quality is highly important for doctors, the experimental part pays significant attention to the training quality and prediction results of the model. The experiments, comprising 100 training cases analyzed through similarity metrics, demonstrate the effectiveness of the provided algorithm. The outcome suggests concrete directions for further improving the quality of image segmentation via deep learning techniques.
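The thesis tests two loss functions for segmentation; a soft Dice loss is one common candidate for tumour segmentation (an assumption for illustration — the abstract does not name the losses used):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps.

    Computes 1 - 2|P∩T| / (|P| + |T|), where the intersection is
    the elementwise product of the predicted and target masks; eps
    guards against division by zero on empty masks.
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Unlike plain cross-entropy, Dice directly optimises the overlap measure usually reported when comparing segmentations, which is why it is a frequent choice in this setting.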
45

Exploring Ocean Animal Trajectory Pattern via Deep Learning

Wang, Su 23 May 2016 (has links)
We trained a combined deep convolutional neural network to predict seals' age (3 categories) and gender (2 categories). The dataset contains 110 seals with around 489 thousand location records; most records are continuous and sampled at a fixed interval. We created five convolutional layers for feature representation and built two fully connected structures as the age and gender classifiers, respectively, each consisting of three fully connected layers. Taking seals' latitude and longitude as input, the full network, which comprises 780,000 neurons and 2,097,000 parameters, reaches an accuracy of 70.72% for predicting seals' age while simultaneously achieving 79.95% for gender estimation.
46

Object Recognition with Progressive Refinement for Collaborative Robots Task Allocation

Wu, Wenbo 18 December 2020 (has links)
With the rapid development of deep learning techniques, the application of Convolutional Neural Networks (CNNs) has benefited the task of target object recognition, and several state-of-the-art object detectors achieve excellent precision. When detection results are applied to real-world collaborative robotics, however, the reliability and robustness of the target detection stage are essential to support efficient task allocation. In this work, collaborative robot task allocation assumes that each robotic agent possesses specialized capabilities to be matched with detected targets, which represent tasks in the surrounding environment that impose specific requirements. The goal is to reach a specialized labor distribution among the individual robots by best matching their specialized capabilities with the requirements imposed by the tasks. To further improve task recognition with convolutional neural networks in this context, this thesis proposes an approach for progressively refining the target detection process, taking advantage of the additional images that can be collected by mobile cameras installed on robotic vehicles. The proposed methodology combines a CNN-based object detection module with a refinement module. For the detection module, a two-stage object detector, Mask R-CNN, with some adaptations to region proposal generation, and a one-stage object detector, YOLO, are experimentally investigated in the context considered. The generated recognition scores serve as input to the refinement module, where the current detection result is treated as a priori evidence to enhance the next detection of the same target, with the goal of iteratively improving the target recognition scores.
Both the Bayesian method and the Dempster-Shafer theory are experimentally investigated to achieve the data fusion process involved in the refinement process. The experimental validation is conducted on indoor search-and-rescue (SAR) scenarios and the results presented in this work demonstrate the feasibility and reliability of the proposed progressive refinement framework, especially when the combination of adapted Mask RCNN and D-S theory data fusion is exploited.
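The Dempster-Shafer fusion step investigated above can be sketched as follows (a minimal illustration of Dempster's rule of combination; the mass values and hypothesis names are invented):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses over focal sets (frozensets of hypotheses) are
    multiplied pairwise, mass assigned to an empty intersection
    (conflict) is discarded, and the remainder is renormalised.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two detections of the same target, each a mass over {person, debris}.
m1 = {frozenset({"person"}): 0.6, frozenset({"person", "debris"}): 0.4}
m2 = {frozenset({"person"}): 0.7, frozenset({"person", "debris"}): 0.3}
fused = dempster_combine(m1, m2)
```

Successive detections of the same target can be folded in by repeatedly combining the running belief with each new mass function, which is the essence of the progressive refinement loop.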
47

DEEP LEARNING-BASED PANICLE DETECTION BY USING HYPERSPECTRAL IMAGERY

Ruya Xu (9183242) 30 July 2020 (has links)
<div>Sorghum, which is grown internationally as a cereal crop that is robust to heat, drought, and disease, has numerous applications for food, forage, and biofuels. When monitoring the growth stages of sorghum, or phenotyping specific traits for plant breeding, it is important to identify and monitor the panicles in the field due to their impact on grain production. Several studies have focused on detecting panicles based on data acquired by RGB and multispectral remote sensing technologies. However, few experiments have included hyperspectral data because of its high dimensionality and computational requirements, even though the data provide abundant spectral information. Among analysis approaches, machine learning, and specifically deep learning models, have the potential to accommodate the complexity of these data. In order to detect panicles in the field with different physical characteristics, such as colors and shapes, very high spectral and spatial resolution hyperspectral data were collected with a wheel-based platform, processed, and analyzed with multiple extensions of the VGG-16 Fully Convolutional Network (FCN) semantic segmentation model.</div><div><br></div><div>Orthorectification experiments were also conducted to obtain the proper positioning of the image data acquired by the pushbroom hyperspectral camera at near range. The scale of the LiDAR-derived DSM used for orthorectification of the hyperspectral data proved to be a critical issue, and applying the Savitzky-Golay filter to the original DSM data was shown to improve the quality of the orthorectified imagery.</div><div><br></div><div>Three tuned versions of the VGG-16 FCN deep learning architecture were modified to accommodate the hyperspectral data: PCA&FCN, 2D-FCN, and 3D-FCN. 
It was concluded that all three models can detect the late-season panicles included in this study, but the end-to-end models performed better in terms of precision, recall, and the F-score metric. Future work should focus on improving the annotation strategies and model architecture to detect different panicle varieties and to separate overlapping panicles, based on adequate quantities of training data acquired during the flowering stage.</div>
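The Savitzky-Golay smoothing applied to the DSM can be sketched in pure NumPy (window length and polynomial order here are illustrative, not the study's values, and edges are left unsmoothed for simplicity):

```python
import numpy as np

def savgol_smooth(y, window, order):
    """Minimal Savitzky-Golay smoother for a 1-D signal.

    Fits a polynomial of the given order to each sliding window by
    least squares and evaluates it at the window centre, which
    smooths noise while preserving slopes better than a plain
    moving average.
    """
    half = window // 2
    # Design matrix for window positions -half..half, solved once.
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    # Row 0 of the pseudo-inverse evaluates the fitted polynomial at x = 0.
    coeffs = np.linalg.pinv(A)[0]
    out = y.astype(float).copy()
    for i in range(half, len(y) - half):
        out[i] = coeffs @ y[i - half : i + half + 1]
    return out
```

A useful sanity check on the design: data that already follow a polynomial of degree at most `order` pass through the filter unchanged in the interior.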
48

Investigation of real-time lightweight object detection models based on environmental parameters

Persson, Dennis January 2022 (has links)
As the world moves toward a more digital society, with most people owning tablets, smartphones, and smart objects, solving real-world computational problems with handheld devices is becoming common. Detecting or tracking objects with a camera is used in many fields, from self-driving cars and item sorting to X-ray analysis, as referenced in the Introduction. Object detection is computationally heavy, which is why a powerful computer is normally needed for it to run quickly. Lightweight object detection models are less accurate than heavyweight models because they trade accuracy for inference speed on limited-computing devices. As handheld devices become more powerful and such models become more accessible, the ability to build small object detection systems at home or at work increases substantially. Knowing which factors have a large impact on object detection can help a user design or choose the correct model. This study explores what impact distance, angle, and light have on Inceptionv2 SSD, MobileNetv3 Large SSD, and MobileNetv3 Small SSD trained on the COCO dataset. The results indicate that distance is the most dominant factor for the Inceptionv2 SSD model. The data for the MobileNetv3 SSD models suggest that angle might have the biggest impact on these models, but the data are too inconclusive to say so with certainty. Knowing which factors most affect a given model's performance lets the user make a more informed choice for their field of use.
49

Investigation on how presentation attack detection can be used to increase security for face recognition as biometric identification : Improvements on traditional locking system

Öberg, Fredrik January 2021 (has links)
Biometric identification is already widely applied in society: today's mobile phones use fingerprints and other methods such as iris and face recognition. With the growth of technologies like computer vision, the Internet of Things, and artificial intelligence, the use of face recognition as biometric identification on ordinary doors has become increasingly common. This thesis investigates the possibility of replacing regular door locks with face recognition, or supplementing the locks to increase security, using a pre-trained state-of-the-art face recognition method based on a convolutional neural network. An initial investigation concluded that network-based face recognition is highly vulnerable to presentation attacks. This study therefore investigates protection mechanisms against such attacks by developing a presentation attack detection (PAD) component and analyzing its performance. The results from the proof of concept showed that local binary pattern histograms (LBPH) used as presentation attack detection helped the state-of-the-art face recognition system block up to 88% of the attacks that the convolutional neural network approved without the PAD. However, to replace traditional locks entirely, more work must be done: a higher percentage of attacks must be blocked, and more attack types must be covered. Nevertheless, as a supplement, face recognition is a promising technology for traditional door locks, enhancing their security by complementing authorization with biometric authentication. The main contribution is showing that a simple, older method such as LBPH can help modern state-of-the-art face recognition detect presentation attacks. This study also adapted the PAD to be suitable for low-end edge devices, so that it can operate in environments where modern solutions are deployed.
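The texture descriptor behind LBPH-based presentation attack detection can be sketched as follows (a minimal NumPy illustration of plain 8-neighbour LBP codes, without the histogram and classifier stages the full method adds):

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour local binary pattern (LBP) codes.

    Each interior pixel is compared with its eight neighbours;
    the comparison bits (neighbour >= centre) form an 8-bit code.
    Histograms of these codes over image cells then feed a
    classifier that separates live faces from printed or replayed ones.
    """
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes
```

Because the computation is a handful of integer comparisons per pixel, it suits the low-end edge devices targeted above far better than a second neural network would.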
50

Image Steganography Using Deep Learning Techniques

Anthony Rene Guzman (12468519) 27 April 2022 (has links)
<p>Digital image steganography is the process of embedding information within a cover image in a secure, imperceptible, and recoverable way. The three main methods of digital image steganography are spatial, transform, and neural network methods. Spatial methods modify the pixel values of an image to embed information, while transform methods embed hidden information within the frequency domain of the image. Neural network-based methods use neural networks to perform the hiding process, which is the focus of the proposed methodology.</p> <p>This research explores the use of deep convolutional neural networks (CNNs) in digital image steganography. This work extends an existing implementation that used a two-dimensional CNN to perform the preparation, hiding, and extraction phases of the steganography process. The methodology proposed in this research, however, introduced changes into the structure of the CNN and used a gain function based on several image similarity metrics to maximize the imperceptibility between a cover and steganographic image.</p> <p>The performance of the proposed method was measured using frequently utilized image metrics such as the structural similarity index measure (SSIM), mean squared error (MSE), and peak signal-to-noise ratio (PSNR). The results showed that the steganographic images produced by the proposed methodology are imperceptible to the human eye, while still providing good recoverability. Comparing the results of the proposed methodology to those of the original methodology revealed that the proposed network greatly improved over the base methodology in terms of SSIM and compares well to existing steganography methods.</p>
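Two of the similarity metrics named above, MSE and PSNR, can be computed directly (a minimal NumPy sketch; SSIM is omitted here as it is considerably more involved):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB.

    Higher values mean the steganographic image is closer to the
    cover image, i.e. the embedding is more imperceptible;
    identical images yield infinite PSNR.
    """
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

A gain function of the kind described above could combine such metrics into a single score to be maximised during training.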
