  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Code Files

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
1) real_time_object_detection.py: Python script for deploying the trained deep neural network on a live stream.
2) augmentation.py: Python script for augmenting Detector images.
3) tcp_send_command.py: Python script for sending a system-stop CPI command to the Gateway as a CPI message.
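The scripts themselves are not included in this listing. As a rough illustration only, an augmentation routine of the kind augmentation.py might perform could look like the following minimal pure-Python sketch (the function name `augment` and the grayscale nested-list image format are assumptions, not the author's actual code):

```python
import random

def augment(image, seed=0):
    """Return two simple augmented variants of an image given as a
    nested list of rows of grayscale pixel intensities (0-255):
    a horizontal mirror and a brightness-shifted copy."""
    rng = random.Random(seed)
    flipped = [row[::-1] for row in image]          # horizontal mirror
    shift = rng.randint(-30, 30)                    # random brightness offset
    brightened = [[min(255, max(0, p + shift)) for p in row] for row in image]
    return flipped, brightened
```

Real detector-training pipelines typically add rotations, crops, and color jitter as well; this sketch shows only the general shape of such a script.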
162

Demos after First Training Run

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Demos of deploying the caffemodel trained for 16,000 iterations after the initial training session, in the three scenarios outlined in the paper, with a minimum confidence score of 30% for detections.
163

Combo 5 and Combo 15 Demos

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Demos of deploying the combo 5 caffemodel trained for 18,000 iterations and the combo 15 caffemodel trained for 25,000 iterations.
164

Computer vision-based systems for environmental monitoring applications

Porto Marques, Tunai 12 April 2022 (has links)
Environmental monitoring refers to a host of activities involving the sampling or sensing of diverse properties from an environment in an effort to monitor, study and overall better understand it. While potentially rich and scientifically valuable, these data often create challenging interpretation tasks because of their volume and complexity. This thesis explores the efficiency of Computer Vision-based frameworks towards the processing of large amounts of visual environmental monitoring data. While considering every potential type of visual environmental monitoring measurement is not possible, this thesis elects three data streams as representatives of diverse monitoring layouts: visual out-of-water stream, visual underwater stream and active acoustic underwater stream. Detailed structure, objectives, challenges, solutions and insights from each of them are presented and used to assess the feasibility of Computer Vision within the environmental monitoring context. This thesis starts by providing an in-depth analysis of the definition and goals of environmental monitoring, as well as the Computer Vision systems typically used in conjunction with it. The document continues by studying the visual underwater stream via the design of a novel system employing a contrast-guided approach towards the enhancement of low-light underwater images. This enhancement system outperforms multiple state-of-the-art methods, as supported by a group of commonly-employed metrics. A pair of detection frameworks capable of identifying schools of herring, salmon, hake and swarms of krill are also presented in this document. The inputs used in their development, echograms, are visual representations of acoustic backscatter data from echosounder instruments, thus contemplating the active acoustic underwater stream. These detectors use different Deep Learning paradigms to account for the unique challenges presented by each pelagic species.
Specifically, the detection of krill and finfish is accomplished with a novel semantic segmentation network (U-MSAA-Net) capable of leveraging local and contextual information from feature maps of multiple scales. To explore the out-of-water visual data stream, we examine a large dataset composed of years' worth of images from a coastal region with heavy marine vessel traffic, which has been associated with significant anthropogenic footprints upon marine environments. A novel system combining "traditional" Computer Vision and Deep Learning is proposed for the identification of such vessels under diverse visual appearances in this monitoring imagery. Thorough experimentation shows that this system is able to efficiently detect vessels of diverse sizes, shapes, colors and levels of visibility. The results and reflections presented in this thesis reinforce the hypothesis that Computer Vision offers an extremely powerful set of methods for the automatic, accurate, time- and space-efficient interpretation of large amounts of visual environmental monitoring data, as detailed in the remainder of this work. / Graduate
165

Detekcija bolesti biljaka tehnikama dubokog učenja / Plant disease detections using deep learning techniques

Arsenović Marko 07 October 2020 (has links)
The research presented in this dissertation aimed to develop a new method, based on deep convolutional neural networks, for detecting plant diseases from leaf images. The experimental part of the thesis reviews the approaches to automated plant disease detection available in the literature to date, as well as the limitations of the resulting models when used under natural conditions. The dissertation introduces a new dataset of leaf images, currently the largest by number of images among publicly available datasets, experimentally confirms new GAN-based augmentation approaches on leaf images, and proposes a novel specialized two-phase deep neural network method as a potential answer to the shortcomings of existing solutions, providing the possibility of practical use of the newly developed model.
166

Indoor 3D Scene Understanding Using Depth Sensors

Lahoud, Jean 09 1900 (has links)
One of the main goals in computer vision is to achieve a human-like understanding of images. Nevertheless, image understanding has been mainly studied in the 2D image frame, so more information is needed to relate it to the 3D world. With the emergence of 3D sensors (e.g. the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g. robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes that could be used for self-supervised object segmentation. These methods allow capturing the physical extents of 3D objects, such as their sizes and actual locations within a scene.
167

Real-time vehicle and pedestrian detection, a data-driven recommendation focusing on safety as a perception to autonomous vehicles

Vlahija, Chippen, Abdulkader, Ahmed January 2020 (has links)
Interest in autonomous vehicles has grown rapidly around the world over the last decade. This paper focuses on a vision-based approach to real-time vehicle and pedestrian detection as a perception component for autonomous vehicles, using a convolutional neural network for object detection. A YOLOv3-tiny model is trained on the INRIA dataset to detect vehicles and pedestrians, and the model also measures the distance to the detected objects. Each step of the machine learning process is described, with measures taken to combat overfitting and to increase speed and accuracy. Based on the training performed, the authors were able to increase the mean average precision (a standard accuracy measure for object detectors) from 31.3% to 62.14%, while maintaining a speed of 18 frames per second.
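The abstract does not say how distance to detected objects is measured. A common monocular approach (an assumption here, not necessarily the authors' method) is the pinhole-camera model: an object of known real-world height appearing a certain number of pixels tall implies a range. A minimal sketch, with hypothetical names:

```python
def estimate_distance(focal_px, real_height_m, bbox_height_px):
    """Pinhole-camera distance estimate in metres.

    focal_px:        camera focal length in pixels
    real_height_m:   assumed real-world height of the object class
    bbox_height_px:  height of the detected bounding box in pixels
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * real_height_m / bbox_height_px
```

For example, with a 700 px focal length, a 1.7 m pedestrian whose bounding box spans 170 px would be estimated at 7.0 m. The accuracy of such an estimate depends heavily on the assumed class height and on camera calibration.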
168

Cross Platform Training of Neural Networks to Enable Object Identification by Autonomous Vehicles

January 2019 (has links)
abstract: Autonomous vehicle technology has been evolving for years since the Automated Highway System Project. However, this technology has been under increased scrutiny ever since an autonomous vehicle killed Elaine Herzberg, who was crossing the street in Tempe, Arizona in March 2018. Recent tests of autonomous vehicles on public roads have faced opposition from nearby residents. Before these vehicles are widely deployed, it is imperative that the general public trusts them. For this, the vehicles must be able to identify objects in their surroundings and demonstrate the ability to follow traffic rules while making decisions with human-like moral integrity when confronted with an ethical dilemma, such as an unavoidable crash that will injure either a pedestrian or the passenger. Testing autonomous vehicles in real-world scenarios would pose a threat to people and property alike. A safe alternative is to simulate these scenarios and test to ensure that the resulting programs can work in real-world scenarios. Moreover, in order to detect a moral dilemma situation quickly, the vehicle should be able to identify objects in real-time while driving. Toward this end, this thesis investigates the use of cross-platform training for neural networks that perform visual identification of common objects in driving scenarios. Here, the object detection algorithm Faster R-CNN is used. The hypothesis is that it is possible to train a neural network model to detect objects from two different domains, simulated or physical, using transfer learning. As a proof of concept, an object detection model is trained on image datasets extracted from CARLA, a virtual driving environment, via transfer learning. After bringing the total loss factor to 0.4, the model is evaluated with an IoU metric. It is determined that the model has a precision of 100% and 75% for vehicles and traffic lights respectively. The recall is found to be 84.62% and 75% for the same. 
It is also shown that this model can detect the same classes of objects from other virtual environments and real-world images. Further modifications to the algorithm that may be required to improve performance are discussed as future work. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2019
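The precision and recall figures above come from matching predicted boxes to ground truth with an IoU metric. A minimal sketch of that evaluation (greedy one-to-one matching at a fixed IoU threshold; the exact matching scheme used in the thesis is not stated, so this is an illustrative assumption):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, thresh=0.5):
    """Greedily match each prediction to an unmatched ground-truth box
    at the given IoU threshold; return (precision, recall)."""
    matched = set()
    tp = 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

With per-class box lists, this yields the kind of per-class precision and recall numbers reported above (e.g. 100% precision with 84.62% recall means every predicted vehicle box matched a truth box, but some truth boxes went undetected).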
169

Object Recognition Using Scale-Invariant Chordiogram

Tonge, Ashwini 05 1900 (has links)
This thesis describes an approach for object recognition using the chordiogram shape-based descriptor. Global shape representations are highly susceptible to clutter generated by the background or other irrelevant objects in real-world images. To overcome this problem, we aim to extract a precise object shape using superpixel segmentation, perceptual grouping, and connected components. The chordiogram shape descriptor is based on geometric relationships of chords generated from pairs of boundary points of an object. The chordiogram descriptor captures holistic properties of the shape and has also proven suitable for object detection and digit recognition. Additionally, it is translation invariant and robust to shape deformations. Despite these properties, however, the chordiogram is not scale-invariant. To this end, we propose scale-invariant chordiogram descriptors and aim to achieve similar performance before and after applying scale invariance. Our experiments show that we achieve similar performance with and without scale invariance for silhouettes and real-world object images. We also show experiments at different scales to confirm that we obtain scale invariance for the chordiogram.
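Since chords are line segments between boundary-point pairs, their lengths scale linearly with the object, so one natural way to obtain scale invariance is to normalise chord lengths by a scale estimate such as their mean. The sketch below illustrates this idea only; it is not the thesis's actual descriptor, and the function names are hypothetical:

```python
from itertools import combinations
from math import hypot

def chord_lengths(boundary):
    """Lengths of all chords between pairs of boundary points (x, y)."""
    return [hypot(q[0] - p[0], q[1] - p[1])
            for p, q in combinations(boundary, 2)]

def scale_invariant_chords(boundary):
    """Normalise chord lengths by their mean so that a uniformly scaled
    shape yields identical values (one simple route to scale invariance)."""
    lengths = chord_lengths(boundary)
    mean = sum(lengths) / len(lengths)
    return [length / mean for length in lengths]
```

A unit square and the same square scaled by any factor produce identical normalised chord lengths, which is the property a scale-invariant chordiogram needs; the full descriptor additionally bins chord orientations and relative positions into a histogram.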
170

Deep Learning-based Hazardous Materials Detection Algorithm

WU, SHUANG 25 January 2022 (has links)
No description available.
