151

To Detect Water-Puddle On Driving Terrain From RGB Imagery Using Deep Learning Algorithms

Muske, Manideep Sai Yadav January 2021 (has links)
Background: With the emerging application of autonomous vehicles in the automotive industry, several efforts have been made toward their complete adoption. One of the many problems in creating autonomous technology is the detection of water puddles, which can damage internal components and cause the vehicle to lose control. This thesis focuses on the detection of water puddles in on-road and off-road conditions with the use of Deep Learning models. Objectives: The thesis focuses on finding suitable Deep Learning algorithms for detecting water puddles; an experiment is then performed with the chosen algorithms, which are compared with each other based on the performance evaluation of the trained models. Methods: The study uses a literature review to find appropriate Deep Learning algorithms to answer the first research question, followed by an experiment to compare and evaluate the selected algorithms. Metrics used to compare the algorithms include accuracy, precision, recall, F1 score, training time, and detection speed. Results: The literature review indicated that Faster R-CNN and SSD are suitable algorithms for object detection applications. The experimental results indicated that on the basis of accuracy, recall, and F1 score, Faster R-CNN is the better-performing algorithm, but on the basis of precision, training time, and detection speed, SSD performs better. Conclusions: After carefully analyzing the results, Faster R-CNN is preferred because, in the real-life scenario the thesis aims at, correctly predicting water puddles is key.
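The metrics used to compare the two detectors can be sketched in a few lines. This is an illustrative sketch, not code from the thesis; the counts passed in are hypothetical.

```python
# Illustrative sketch (not from the thesis): deriving accuracy, precision,
# recall, and F1 score from confusion-matrix counts, the metrics used to
# compare Faster R-CNN and SSD.
def detection_metrics(tp, fp, fn, tn=0):
    """Compute comparison metrics from true/false positive/negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Hypothetical counts for one detector on a puddle test set
m = detection_metrics(tp=80, fp=10, fn=20, tn=90)
```

A detector can thus lead on recall and F1 while trailing on precision, which is exactly the trade-off the experiment reports.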
152

Quality Control: Detect Visual Defects on Products Using Image Processing and Deep Learning

Pettersson, Isac, Skäremo, Johan January 2023 (has links)
Computer vision, a prominent subfield of artificial intelligence, has gained widespread utilization in diverse domains such as surveillance, security, and robotics. This research endeavors to develop a semi-automated defect detection system serving as a quality control assurance mechanism for Nolato MediTor, a manufacturing company within the medical device industry engaged in the production of anesthesia breathing bags. The primary focus of this study revolves around the detection of a specific defect, namely, holes. Within the context of Nolato MediTor, prioritizing recall (sensitivity) assumes utmost significance, as it entails favoring the rejection of functional breathing bags over the inadvertent acceptance of defective ones. The proposed system encompasses a robust metallic stand facilitating precise positioning for three distinct camera angles, accompanied by a Xiaomi Redmi Note 11 Pro phone and a software component designed to process incoming image folders representing a complete view of a breathing bag from multiple angles. Subsequently, these images undergo analysis using the learned weights derived from the implemented Mask R-CNN model, enabling a cohesive assessment of the breathing bag. The system's performance was rigorously evaluated, and the best-performing weights demonstrated a remarkable recall rate of 0.995 for the first test set, exceeding the desired recall threshold of 95%. Similarly, for the second test set, the recall rate achieved an impressive value of 0.949, narrowly missing the 95% threshold by a marginal 0.001. Furthermore, regarding computational efficiency, quantified as the processing time per breathing bag, the longest average duration recorded amounted to approximately 10.151 seconds, with the potential for further enhancement by employing a higher-standard GPU. This study serves as a proof of concept, demonstrating the feasibility of achieving semi-automated quality control utilizing a CNN. The implemented system represents a promising prototype with potential scalability for improved operational conditions and expanded defect coverage, thus paving the way towards fully automated quality control within large-scale industries.
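The per-bag "cohesive assessment" across camera angles can be sketched as a simple aggregation rule. This is an assumed sketch, not the thesis implementation; the detection dictionaries are hypothetical model output.

```python
# Assumed sketch (not from the thesis): recall-first aggregation of
# per-angle detections into a single accept/reject verdict for one bag.
def assess_bag(per_view_detections):
    """per_view_detections: one list of detections per camera angle.
    Rejecting on any detected hole favors recall, as the study prioritizes:
    a functional bag may be rejected, but a defective one is never accepted.
    """
    return "reject" if any(per_view_detections) else "accept"

# Hypothetical output from three camera angles; one view shows a hole.
views = [[], [{"label": "hole", "score": 0.97}], []]
verdict = assess_bag(views)  # → "reject"
```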
153

Police Car 'Visibility': The Relationship between Detection, Categorization and Visual Saliency

Thomas, Mark Dewayne 12 May 2012 (has links)
Perceptual categorization involves integrating bottom-up sensory information with top-down knowledge based on prior experience. Bottom-up information comes from the external world, and visual saliency is a type of bottom-up information calculated from the differences between the visual characteristics of adjacent spatial locations. There is currently a related debate in municipal law enforcement communities about which are more 'visible': white police cars or black-and-white police cars. Municipalities do not want police cars to be hit by motorists, and they also want police cars to be seen in order to promote a public presence. The present study used three behavioral experiments to investigate the effects of visual saliency on object detection and categorization. Importantly, the results indicated that so-called 'object detection' is not a valid construct. Rather than identifying objectness or objecthood prior to categorization, object categorization is an obligatory process, and object detection is a post-categorization decision, with higher-salience objects being categorized more easily than lower-salience objects. An additional experiment was conducted to examine the features that constitute a police car. Based on salience alone, black-and-white police cars were better categorized than white police cars, and light bars were slightly more important police-car-defining components than markings.
154

DRIVING-SCENE IMAGE CLASSIFICATION USING DEEP LEARNING NETWORKS: YOLOV4 ALGORITHM

Rahman, Muhammad Tamjid January 2022 (has links)
The objective of the thesis is to explore an approach to classifying and localizing different objects in driving-scene images using the YOLOv4 algorithm trained on a custom dataset. YOLOv4, a one-stage object detection algorithm, aims for better accuracy and speed. The deep learning (convolutional) network-based classification model was trained and validated on a subset of the SODA10M dataset annotated with six classes of objects (Car, Cyclist, Truck, Bus, Pedestrian, and Tricycle), which are the objects most often seen on the road. Another model, based on YOLOv3 (the previous version of YOLOv4), was trained on the same dataset and its performance compared with the YOLOv4 model. Both algorithms are fast but have difficulty detecting some objects, especially small ones. Larger quantities of properly annotated training data can improve the algorithms' accuracy.
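One-stage detectors such as YOLOv3 and YOLOv4 emit many overlapping candidate boxes per object and merge them with non-maximum suppression. The sketch below is a generic, hedged illustration of that post-processing step, not code from the thesis; boxes and scores are hypothetical.

```python
# Hedged sketch of non-maximum suppression (NMS), the standard
# post-processing step in one-stage detectors like YOLOv3/YOLOv4.
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes; drop any box overlapping a kept one."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) < thresh for k in keep):
            keep.append(i)
    return keep

# Two near-duplicate car boxes and one distant box (hypothetical values)
kept = nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
           [0.9, 0.8, 0.7])  # → [0, 2]
```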
155

Camera Based Deep Learning Algorithms with Transfer Learning in Object Perception

Hu, Yujie January 2021 (has links)
The perception system is the key for autonomous vehicles to sense and understand the surrounding environment. As the cheapest and most mature sensor, monocular cameras create a rich and accurate visual representation of the world. The objective of this thesis is to investigate whether camera-based deep learning models with the transfer learning technique can achieve 2D object detection, License Plate Detection and Recognition (LPDR), and highway lane detection in real time. The You Only Look Once version 3 (YOLOv3) algorithm, with and without transfer learning, is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset for car, cyclist, and pedestrian detection. This application shows that objects can be detected in real time and that transfer learning boosts detection performance. The Convolutional Recurrent Neural Network (CRNN) algorithm with a pre-trained model is applied to multiple License Plate (LP) datasets for real-time LP recognition. The optimized model is then used to recognize Ontario LPs and achieves high accuracy. The Efficient Residual Factorized ConvNet (ERFNet) algorithm with transfer learning and a cubic spline model are modified and implemented on the TuSimple dataset for lane segmentation and interpolation. The detection performance and speed are comparable with other state-of-the-art algorithms. / Thesis / Master of Applied Science (MASc)
156

AUTONOMOUS SAFE LANDING ZONE DETECTION FOR UAVs UTILIZING MACHINE LEARNING

Nepal, Upesh 01 May 2022 (has links)
One of the main challenges of integrating unmanned aerial vehicles (UAVs) into today's society is the risk of in-flight failures, such as motor failure, occurring in populated areas, which can result in catastrophic accidents. We propose a framework to manage the consequences of an in-flight system failure and bring down the aircraft safely without causing any serious accident to people, property, or the UAV itself. This can be done in three steps: a) detecting a failure, b) finding a safe landing spot, and c) navigating the UAV to the safe landing spot. In this thesis, we focus on step b. Specifically, we are working to develop an active system that can detect landing sites autonomously without any reliance on UAV resources. To detect a safe landing site, we use a deep learning algorithm named "You Only Look Once" (YOLO) that runs on a Jetson Xavier NX computing module, which is connected to a camera for image processing. YOLO is trained on the DOTA dataset, and we show that it can detect landing spots and obstacles effectively. Then, by avoiding the detected objects, we find a safe landing spot. The effectiveness of this algorithm is first shown through comprehensive simulations. We also plan to experimentally validate the algorithm by flying a UAV, capturing ground images, and applying the algorithm in real time to see if it can effectively detect acceptable landing spots.
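The "avoid the detected objects" step can be sketched as a grid search over the ground image for cells that intersect no detector-reported obstacle box. This is a minimal sketch under assumed conventions (axis-aligned boxes, a square image), not the thesis implementation.

```python
# Assumed sketch (not from the thesis): given obstacle bounding boxes
# reported by YOLO, list image grid cells that no box overlaps; any such
# cell is a candidate safe landing spot.
def free_cells(obstacles, grid=4, size=100):
    """obstacles: (x1, y1, x2, y2) boxes in a size x size image.
    Returns (row, col) grid cells that intersect no obstacle box."""
    cell = size / grid
    out = []
    for r in range(grid):
        for c in range(grid):
            cx1, cy1 = c * cell, r * cell
            cx2, cy2 = cx1 + cell, cy1 + cell
            # A cell is clear if every box is fully outside it.
            clear = all(x2 <= cx1 or x1 >= cx2 or y2 <= cy1 or y1 >= cy2
                        for (x1, y1, x2, y2) in obstacles)
            if clear:
                out.append((r, c))
    return out

# Hypothetical detections: two obstacles in the upper part of the image
spots = free_cells([(0, 0, 60, 60), (70, 0, 100, 20)])
```

A fuller system would also rank the clear cells, for example by distance from the nearest obstacle.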
157

Automated Foreign Object Detection on Conveyor Belts

Sundelius, Kim January 2023 (has links)
Ore is transported using belt conveyor systems. The transported ore contains various anomalous objects that must be removed to prevent damage to the system. Currently, anomalies are detected manually by human operators, which leads to increased wage costs and damage to the system from missed anomalies. The thesis aims to solve this problem through trained neural networks that can run on relatively cheap systems with greater accuracy than humans. A set of neural networks was trained on both the BCS dataset, consisting of data collected from the belt conveyor system, and the MVTec dataset. The latter dataset was used as a way of checking the correctness of the implementation of the models. As training neural networks usually requires large datasets, this thesis also focuses on the effect of the proportion of labelled versus unlabelled data on the models. Labelling data can be time-consuming and expensive, so investigating whether or how much data can be left unlabelled with no or minimal loss of accuracy could lead to further cost reductions. The convolutional autoencoder (CAE) performed best on the classification-based task on the BCS dataset, where it managed to classify most of the dataset correctly, with an F1-score of 0.94 on data without anomalies and an F1-score of 0.86 on data with anomalies, as long as suitable thresholds were set. ResNet performed somewhat well, with a 0.91 F1-score in detecting anomaly-free data and a 0.50 F1-score in detecting anomaly-containing data. The SimCLR and SimCLRv2 models were unable to learn from the data and defaulted to always assuming the data contained anomalies. The CAE model trained using the L1 loss function performed best with an IoU of 0.272 and worst with the SSIM-based loss function with an IoU of 0.160. The effect of labelled versus unlabelled data on the MVTec dataset was tested using the SimCLR and SimCLRv2 models; as expected, the models performed best with the fully labelled dataset.
The SimCLR model was able to identify all categories with an F1-score greater than 0.67 whereas the other splits performed worse overall with two or more categories completely misclassified. The SimCLRv2 was able to classify six categories with an F1-score greater than 0.0 which was significantly better than all other labelled and unlabelled splits.
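The IoU scores reported for the CAE's localization output can be illustrated with a small sketch. This is an assumed illustration, not code from the thesis; for segmentation-style outputs, IoU compares the set of pixels flagged anomalous against the ground-truth anomaly pixels.

```python
# Illustrative sketch (not from the thesis): IoU between a predicted
# anomaly mask and a ground-truth mask, each given as a set of pixels.
def mask_iou(pred, truth):
    """pred, truth: sets of (row, col) pixel coordinates."""
    inter = len(pred & truth)
    union = len(pred | truth)
    return inter / union if union else 1.0  # two empty masks agree fully

# Hypothetical 2x2-region masks sharing two of four flagged pixels
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
score = mask_iou(pred, truth)  # 2 shared pixels / 4 total = 0.5
```

On this scale, the reported scores of 0.272 (L1 loss) and 0.160 (SSIM loss) indicate fairly loose overlap between predicted and true anomaly regions.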
158

Object Detection and Classification Based on Point Separation Distance Features of Point Cloud Data

Ji, Jiajie 07 August 2023 (has links)
No description available.
159

Enhancing Object Detection Methods by Knowledge Distillation for Automotive Driving in Real-World Settings

Kian, Setareh 07 August 2023 (has links)
No description available.
160

Image Analysis For Plant Phenotyping

Enyu Cai (15533216) 17 May 2023 (has links)
<p>Plant phenotyping focuses on the measurement of plant characteristics throughout the growing season, typically with the goal of evaluating genotypes for plant breeding and management practices related to nutrient applications. Estimating plant characteristics is important for finding the relationship between the plant's genetic data and observable traits, which is also related to the environment and management practices. Recent machine learning approaches provide promising capabilities for high-throughput plant phenotyping using images. In this thesis, we focus on estimating plant traits for a field-based crop using images captured by Unmanned Aerial Vehicles (UAVs). We propose a method for estimating plant centers by transferring an existing model to a new scenario using limited ground truth data. We describe the use of transfer learning using a model fine-tuned for a single field or a single type of plant on a varied set of similar crops and fields. We introduce a method for rapidly counting panicles using images acquired by UAVs. We evaluate three different deep neural network structures for panicle counting and location. We propose a method for sorghum flowering time estimation using multi-temporal panicle counting. We present an approach that uses synthetic training images from generative adversarial networks for data augmentation to enhance the performance of sorghum panicle detection and counting. We reduce the amount of training data for sorghum panicle detection via semi-supervised learning. We create synthetic sorghum and maize images using diffusion models. We propose a method for tomato plant segmentation by color correction and color space conversion. We also introduce the methods for detecting and classifying bacterial tomato wilting from images.</p>
