531

Train Solver Prototxt files for Combo 5 and Combo 15

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Training prototxt file containing the hyperparameter settings for combinations 5 and 15 of optimized training runs.
532

Training plots for Combo 5 and 15

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Plots generated from training logs of combinations 5 and 15 of optimized training runs.
533

UNSUPERVISED AND SEMI-SUPERVISED LEARNING IN AUTOMATIC INDUSTRIAL IMAGE INSPECTION

Weitao Tang (12462516) 27 April 2022 (has links)
<p>Applying computer vision to X-ray images for automatic visual inspection has been widely studied in industrial production environments. Traditional methods embrace image processing techniques and require a custom design for each product. Although the accuracy of this approach varies, it often falls short of expectations in the production environment. Recently, deep learning algorithms have significantly advanced the capability of computer vision in various tasks and provided new prospects for automatic inspection systems. Numerous studies have applied supervised deep learning to inspect industrial images and reported promising results. However, the methods used in these studies are often supervised, which requires heavy manual annotation. They are therefore not realistic in many manufacturing scenarios because products are constantly updated. Data collection, annotation, and algorithm training can only be performed after the completion of the manufacturing process, causing a significant delay in training the models and establishing the inspection system. This research aimed to tackle the problem using unsupervised and semi-supervised methods so that these computer vision-based machine learning approaches can be rapidly deployed in real-life scenarios. More specifically, this dissertation proposed an unsupervised approach and a semi-supervised deep learning method to identify defective products from industrial inspection images. The proposed methods were evaluated on several open-source inspection datasets and a dataset of X-ray images obtained from a die casting plant. The results demonstrated that the proposed approach achieved better results than other state-of-the-art techniques on several occasions.</p>
534

Towards Autonomous Unmanned Vehicle Systems

Cai, Sheng 09 December 2016 (has links)
As an emerging technology, autonomous Unmanned Vehicle Systems (UVS) have found not only many military applications, but also various civil applications. For example, Google, Amazon and Facebook are developing their UVS plans to explore new markets. However, there are still a lot of challenging problems which deter the UVS's development. We study two important and challenging problems in this dissertation, i.e., localization and 3D reconstruction. Specifically, most GPS based localization systems are not very accurate and can have problems in areas where no GPS signals are available. Based on the Received Signal Strength Indication (RSSI) and Inertial Navigation System (INS), we propose a new hybrid localization system, which is very efficient and can account for dynamic communication environments. Extensive simulation results demonstrate the efficiency of the proposed localization system. Besides, 3D reconstruction is a key problem in autonomous navigation and hence very important for UVS. With the help of high-speed Internet and powerful cloud servers, the light-weight computers on the UVS can now execute computationally expensive computer vision based algorithms. We develop a 3D reconstruction scheme which employs cloud computing to perform realtime 3D reconstruction. Simulations and experiments show the efficacy and efficiency of our scheme.
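The RSSI half of such a hybrid scheme typically maps received signal strength to an estimated range before blending it with INS dead reckoning. A minimal sketch under the standard log-distance path loss model — the constants and the simple weighted blend below are illustrative assumptions, not the dissertation's actual design:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.0):
    """Estimate range (meters) from RSSI via the log-distance path loss model.

    tx_power_dbm is the expected RSSI at 1 m; both defaults are
    illustrative, not values from the dissertation.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def fuse_estimates(ins_pos, rssi_pos, rssi_weight=0.3):
    """Naive INS/RSSI blend: a fixed-weight average of two position
    estimates (a real system would weight by estimated uncertainty)."""
    return tuple((1 - rssi_weight) * i + rssi_weight * r
                 for i, r in zip(ins_pos, rssi_pos))

# At the reference power, the model returns the 1 m calibration distance.
d = rssi_to_distance(-40.0)   # 1.0 m
```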
535

Reconstructing and Interpreting the 3D Shape of Moving Objects

Ferrie, F. P January 1986 (has links)
536

F-SAL: A Framework for Fusion Based Semi-automated Labeling With Feedback

Zaidi, Ahmed January 2021 (has links)
In almost all computer vision and perception based applications, particularly those using camera and lidar, state-of-the-art algorithms are based upon deep neural networks, which require large amounts of data. Thus, the ability to label data accurately and quickly is of great importance. Approaches to semi-automated labeling (SAL) thus far have relied on using state-of-the-art object detectors to assist with labeling; however, these approaches still require a significant number of manual corrections. Surprisingly, none of these approaches have considered labeling from the perspective of multiple diverse algorithms. This thesis presents a new framework for semi-automated labeling called F-SAL, which stands for Fusion Based Semi-automated Labeling. Firstly, F-SAL extends the idea of SAL by introducing multi-algorithm fusion with learning based feedback. Secondly, it incorporates new stages such as uncertainty evaluation and diversity evaluation. All the algorithms and design choices regarding localization fusion, label fusion, and uncertainty and diversity evaluation are presented and discussed in significant detail. The biggest advantage of F-SAL is that, through the fusion of algorithms, the number of true detections matches or exceeds that of the best single detector, while false alarms are suppressed significantly. With a single detector, lowering the false alarm rate requires adjusting detector parameters, which trades fewer false alarms for fewer detections. With F-SAL, a lower false alarm rate can be achieved without sacrificing any detections, as false alarms are suppressed during fusion and true detections are maximized through diversity. Results on several image and lidar datasets show that F-SAL outperforms the single best detector in all scenarios. / Thesis / Master of Applied Science (MASc)
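The false-alarm suppression idea above can be illustrated with a generic cross-detector fusion rule: keep a detection only when boxes from at least two different detectors agree by IoU, then average the agreeing boxes. This is a simplified sketch of multi-detector consensus, not the thesis's actual localization/label fusion algorithm:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(per_detector_boxes, iou_thresh=0.5, min_support=2):
    """Greedy cross-detector fusion: a detection survives only if boxes
    from at least `min_support` distinct detectors overlap with it, which
    suppresses false alarms raised by a single detector."""
    tagged = [(d, box) for d, boxes in enumerate(per_detector_boxes)
              for box in boxes]
    used = [False] * len(tagged)
    fused = []
    for i, (di, bi) in enumerate(tagged):
        if used[i]:
            continue
        cluster = [(di, bi)]
        used[i] = True
        for j in range(i + 1, len(tagged)):
            dj, bj = tagged[j]
            if not used[j] and iou(bi, bj) >= iou_thresh:
                cluster.append((dj, bj))
                used[j] = True
        if len({d for d, _ in cluster}) >= min_support:
            # keep the cluster, averaging the agreeing boxes
            fused.append(tuple(sum(b[k] for _, b in cluster) / len(cluster)
                               for k in range(4)))
    return fused
```

Two detectors agreeing on one object yield one fused box, while a box seen by only one detector is dropped — mirroring the "false alarms suppressed during fusion" behavior described in the abstract.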
537

Supervised and self-supervised deep learning approaches for weed identification and soybean yield prediction

Srivastava, Dhiraj 28 July 2023 (has links)
This research uncovers a novel pathway in precision agriculture, emphasizing the utilization of advanced supervised and self-supervised deep learning approaches for an innovative solution to weed detection and crop yield prediction. The study focuses on key weed species: Italian ryegrass in wheat, Palmer amaranth, and common ragweed in soybean, which are troublesome weeds in the United States. One of the most innovative components of this research is the debut of a self-supervised learning approach specifically tailored for soybean yield prediction using only unlabeled RGB images. This novel strategy presents a departure from traditional yield prediction methods that consider multiple variables, thus offering a more streamlined and efficient methodology that presents a significant contribution to the field. To address the monitoring of Italian ryegrass in wheat cultivation, a bespoke Convolutional Neural Network (CNN) model was developed. It demonstrated impressive precision and recall rates of 100% and 97.5% respectively, in accurately classifying Italian ryegrass in wheat. Among three hyperparameter tuning methods, Bayesian optimization emerged as the most efficient, delivering optimal results in just 10 iterations, compared with the 723 and 304 iterations required for grid search and random search respectively. Further, this study examines the performance of various classification and object detection algorithms on Unmanned Aerial Systems (UAS)-acquired images at different growth stages of soybean and Palmer amaranth. Both the Vision Transformer and EfficientNetB0 models displayed promising test accuracies of 97.69% and 93.26% respectively. However, considering a balance between speed and accuracy, YOLOv6s emerged as the most suitable object detection model for real-time deployment, achieving an 82.6% mean average precision (mAP) at an average inference speed of 8.28 milliseconds.
Furthermore, a self-supervised contrastive learning approach was introduced for automating the labeling of Palmer amaranth and soybean. This method achieved a notable 98.5% test accuracy, indicating the potential for cost-efficient data acquisition and labeling to advance precision agriculture research. A separate study was conducted to detect common ragweed in soybean crops and to predict soybean yield as impacted by varying weed densities. The Vision Transformer and MLP-Mixer models achieved test accuracies of 97.95% and 96.92% for weed detection, with YOLOv6 outperforming YOLOv5, attaining an mAP of 81.5% at an average inference speed of 7.05 milliseconds. Self-supervised learning-based yield prediction models reached a coefficient of determination of up to 0.80 and a correlation coefficient of 0.88 between predicted and actual yield. In conclusion, this research elucidates the transformative potential of self-supervised and supervised deep learning techniques in revolutionizing weed detection and crop yield prediction practices. Its findings significantly contribute to precision agriculture, paving the way for efficient and cost-effective site-specific weed management strategies. This, in turn, promotes reduced environmental impact and enhances the economic sustainability of farming operations. / Master of Science in Life Sciences / This novel research provides a fresh approach to overcoming some of the biggest challenges in modern agriculture by leveraging the power of advanced artificial intelligence (AI) techniques. The study targets key disruptive weed species, such as Italian ryegrass in wheat, Palmer amaranth, and common ragweed in soybean, all of which have the potential to significantly reduce crop yields. Studies were first conducted to detect Italian ryegrass in wheat crops, utilizing RGB images. A model was built using a complex AI system called a Convolutional Neural Network (CNN) to detect this weed with remarkable accuracy.
The study then delves into the use of drones to take pictures of different growth stages of soybean and Palmer amaranth plants. These images were then analyzed by various AI models to assess their ability to accurately identify the plants. The results show some promising findings, with one model being quick and accurate enough to be potentially used in real-time applications. The most important part of this research is the application of self-supervised learning, which learns to label Palmer amaranth and soybean plants on its own. This novel method achieved impressive test accuracy, suggesting a future where data collection and labeling could be done more cost-effectively. In another related study, we detected common ragweed in soybean crops and predicted soybean yield based on various weed densities. AI models once again performed well for weed detection and yield prediction tasks, with self-supervised models showcasing high agreement between predicted and actual yields. In conclusion, this research showcases the exciting potential of self-teaching and supervised AI in transforming the way we detect weeds and predict crop yields. These findings could potentially lead to more efficient and cost-effective ways of managing weeds at specific sites. This could have a positive impact on the environment and improve the economic sustainability of farming operations, paving the way for a greener future.
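The self-supervised contrastive pretraining mentioned above generally relies on a loss that pulls embeddings of two augmented views of the same image together while pushing other pairs apart. A simplified NT-Xent sketch in the generic SimCLR style — an assumption about the family of method used, not the thesis's exact training setup:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Simplified NT-Xent (normalized temperature-scaled cross entropy).

    z1[i] and z2[i] are embeddings of two augmented views of image i.
    Matching pairs are treated as positives; every other embedding in
    the batch is a negative."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

The loss is lowest when each view is closest to its own counterpart, which is what lets the encoder learn labels-free representations before the small supervised fine-tuning step.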
538

Strawberry Detection Under Various Harvestation Stages

Fitter, Yavisht 01 March 2019 (has links) (PDF)
This paper analyzes three techniques for detecting strawberries at various stages of their growth cycle. Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Convolutional Neural Networks (CNN) were implemented on a limited custom-built dataset. The methodologies were compared in terms of accuracy and computational efficiency. Computational efficiency is defined in terms of image resolution, as testing on a smaller image is much quicker than on a larger one. The CNN-based implementation obtained the best results, with 88% accuracy, while also operating at the highest level of efficiency (600x800). LBP generated moderate results, with 74% detection accuracy at an inefficient rate (5000x4000). Finally, HOG's results were inconclusive, as it performed poorly early on, generating too many misclassifications.
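The HOG baseline above rests on a simple building block: a histogram of gradient orientations accumulated per image cell. A minimal NumPy sketch of that block (no block normalization, and the 9-bin setting is just the common default, not necessarily what this paper used):

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell.

    Central-difference gradients, orientations folded into [0, 180)
    degrees, gradient magnitudes summed into orientation bins."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

# A vertical intensity step produces purely horizontal gradients
# (orientation ~0 degrees), so all magnitude lands in the first bin.
patch = np.tile([0., 0., 0., 0., 1., 1., 1., 1.], (8, 1))
h = cell_hog(patch)
```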
539

SQUEEZE AND EXCITE RESIDUAL CAPSULE NETWORK FOR EMBEDDED EDGE DEVICES

Sami Naqvi (13154274) 08 September 2022 (has links)
<p>During recent years, the field of computer vision has evolved rapidly. Convolutional Neural Networks (CNNs) have become the default choice for implementing computer vision tasks. Their popularity is based on how successfully CNNs have performed well-known computer vision tasks such as image annotation, instance segmentation, and others with promising outcomes. However, CNNs have their caveats and need further research to turn them into reliable machine learning algorithms. The disadvantages of CNNs become more evident when considering how they break down an input image. Convolutional neural networks group blobs of pixels to identify objects in a given image. Such a technique makes CNNs incapable of breaking down the input images into sub-parts, which could distinguish the orientation and transformation of objects and their parts. The functions in a CNN are competent at learning only the shift-invariant features of the object in an image. These limitations give researchers and developers a purpose for further enhancing an effective algorithm for computer vision.</p> <p>The opportunity to improve is explored by several distinct approaches, each tackling a unique set of issues in the convolutional neural network's architecture. The Capsule Network (CapsNet) brings an innovative approach to resolving issues pertaining to affine transformations by sharing transformation matrices between the different levels of capsules, while the Residual Network (ResNet) introduced skip connections, which allow deeper networks to be more powerful and solve the vanishing gradient problem.</p> <p>Motivated by the fusion of these advantageous ideas from CapsNet and ResNet with the Squeeze and Excite (SE) block from the Squeeze and Excite Network, this research work presents the SE-Residual Capsule Network (SE-RCN), an efficient neural network model. The proposed model replaces the traditional convolutional layers of CapsNet with skip connections and SE blocks to lower the complexity of CapsNet. The performance of the model is demonstrated on well-known datasets such as MNIST and CIFAR-10, and a substantial reduction in the number of training parameters is observed in comparison to similar neural networks. The proposed SE-RCN requires 6.37 million parameters, achieving 99.71% accuracy on the MNIST dataset, and 10.55 million parameters with 83.86% accuracy on CIFAR-10.</p>
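The SE block that SE-RCN borrows can be stated compactly: squeeze each channel to a scalar by global average pooling, excite through a small two-layer bottleneck (ReLU then sigmoid), and rescale the channels by the resulting weights. A NumPy sketch with untrained placeholder weights, shown only to illustrate the mechanism rather than the thesis's trained model:

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excite recalibration on a (C, H, W) feature tensor.

    w1 (C x C/r) and w2 (C/r x C) would be learned in practice; here
    they are plain arrays standing in for the bottleneck weights."""
    squeezed = feature_maps.mean(axis=(1, 2))        # squeeze: (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)          # excite: ReLU
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # excite: sigmoid, (C,)
    return feature_maps * scale[:, None, None]       # channel-wise rescale

# With zero weights the sigmoid outputs 0.5, so every channel is halved.
x = np.ones((4, 3, 3))
out = se_block(x, np.zeros((4, 2)), np.zeros((2, 4)))
```

The per-channel scalar is what lets the network emphasize informative channels cheaply, which is why the block adds so few parameters relative to the accuracy it buys.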
540

Incorporating Histograms of Oriented Gradients Into Monte Carlo Localization

Norris, Michael K 01 June 2016 (has links) (PDF)
This work presents improvements to Monte Carlo Localization (MCL) for a mobile robot using computer vision. Solutions to the localization problem aim to provide fine resolution on location approximation, and to be resistant to changes in the environment. One such environment change is the kidnapped/teleported robot problem, where a robot is suddenly transported to a new location and must re-localize. The standard method of "Augmented MCL" uses particle filtering combined with the addition of random particles under certain conditions to solve the kidnapped robot problem. This solution is robust, but not always fast. This work combines Histogram of Oriented Gradients (HOG) computer vision with particle filtering to speed up the localization process. The major slowdown in Augmented MCL is the conditional addition of random particles, which depends on the ratio of a short term and long term average of particle weights. This ratio does not change quickly when a robot is kidnapped, leading the robot to believe it is in the wrong location for a period of time. This work replaces this average-based conditional with a comparison of the HOG image directly in front of the robot with a cached version. This resulted in a speedup ranging from 25.3% to 80.7% (depending on parameters used) in localization time over the baseline Augmented MCL.
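The HOG-based trigger described above can be sketched as a simple descriptor comparison: when the live HOG descriptor drifts too far from the one cached for the believed pose, the filter injects random particles. The cosine-distance metric and the 0.3 threshold here are illustrative assumptions, not the tuned values from the thesis:

```python
import numpy as np

def kidnapped(current_desc, cached_desc, threshold=0.3):
    """Fire the recovery trigger when the live HOG descriptor no longer
    matches the descriptor cached for the believed pose (cosine distance)."""
    num = float(np.dot(current_desc, cached_desc))
    den = np.linalg.norm(current_desc) * np.linalg.norm(cached_desc)
    return 1.0 - (num / den if den else 0.0) > threshold

def maybe_reseed(particles, current_desc, cached_desc, n_random, sample_pose):
    """Augmented-MCL-style recovery: add n_random uniformly sampled
    particles only when the HOG check fires, instead of waiting for the
    slow short-term/long-term weight-average ratio to react."""
    if kidnapped(current_desc, cached_desc):
        particles = particles + [sample_pose() for _ in range(n_random)]
    return particles
```

Because the descriptor mismatch appears immediately after a teleport, this check reacts in one update step, which is the source of the speedup the abstract reports.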
