  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

AUTOMATED WEED DETECTION USING MACHINE LEARNING TECHNIQUES ON UAS-ACQUIRED IMAGERY

Aaron Etienne (6570041) 13 August 2019 (has links)
Current methods of broadcast herbicide application have a negative environmental and economic impact. Computer vision methods, specifically those related to object detection, have been reported to aid site-specific weed management by targeting herbicide application on a per-weed basis within a field. However, a major challenge in developing a weed detection system is the requirement for properly annotated training data to differentiate between weeds and crops under field conditions. This research involved creating an annotated database of weeds using UAS-acquired imagery from corn and soybean research plots located in north-central Indiana. A total of 27,828 RGB, 108,398 multispectral, and 23,628 thermal images were acquired using a FLIR Duo Pro R sensor attached to a DJI Matrice 600 Pro UAS. An annotated database of 306 RGB images, organized into monocot and dicot weed classes, was used for network training. Two deep learning networks, DetectNet and You Only Look Once version 3 (YOLOv3), were subjected to five training stages using four annotated image sets. Precision ranged between 3.63 and 65.37% for monocot and between 4.22 and 45.13% for dicot weed detection. This research demonstrated the need for a large annotated weed database to improve the precision of deep learning algorithms through better training of the network.
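The precision figures reported above reduce to a ratio of detection counts; a minimal sketch of that metric (the counts below are hypothetical, for illustration only, not figures from the thesis):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of predicted detections that are actually correct."""
    if true_positives + false_positives == 0:
        return 0.0
    return true_positives / (true_positives + false_positives)

# Hypothetical counts: 42 correct boxes out of 100 predicted
print(precision(42, 58))  # 0.42
```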
32

Computer vision as a tool for forestry / Datorseende som ett verktyg för skogsbruket

Bång, Filip January 2019 (has links)
Forestry is a large industry in Sweden, and methods have been developed to optimize its business processes. Yet computer vision has not been used to a large extent, despite other industries applying it with success. Computer vision is a subarea of machine learning and has become popular thanks to advancements in that field. This project investigates how some of the architectures used in computer vision perform when applied in the context of forestry. Four architectures that had previously performed well on a general dataset were selected and configured to continue training on trees and other objects in the forest. The trained architectures were tested by measuring frames per second (FPS) when performing object detection on a video, and mean average precision (mAP), a measure of how well a trained architecture detects objects. The fastest was an architecture using a Single Shot Detector together with MobileNet v2 as a base network, achieving 29 FPS. The most accurate used Faster R-CNN with Inception ResNet as a base network, achieving 0.119 mAP on the test set. The overall poor mAP of the trained architectures meant that none of them was considered useful in a real-world scenario as is. Suggestions on how to improve the mAP focus on improvements to the dataset.
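The mAP evaluation used above rests on intersection over union (IoU) between predicted and ground-truth boxes; a minimal sketch of that core computation (the corner-coordinate box format and the 0.5 threshold are common conventions, not details taken from the thesis):

```python
def iou(box_a, box_b):
    """Intersection over union for boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection commonly counts as a true positive when IoU >= 0.5
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```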
33

Real time object detection on a Raspberry Pi / Objektdetektering i realtid på en Raspberry Pi

Gunnarsson, Adam January 2019 (has links)
With the recent advancement of deep learning, the performance of object detection techniques has greatly increased in both speed and accuracy. This has made it possible to run highly accurate object detection at real-time speed on modern desktop computer systems. Recently, there has been growing interest in developing smaller and faster deep neural network architectures suited for embedded devices. This thesis explores the suitability of running object detection on the Raspberry Pi 3, a popular embedded computer board. Two controlled experiments are conducted in which two state-of-the-art object detection models, SSD and YOLO, are compared in accuracy and speed. The results show that the SSD model slightly outperforms YOLO in both speed and accuracy, but with the low processing power that the current generation of Raspberry Pi has to offer, neither performs well enough to be viable in applications where high speed is necessary.
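Measuring detector throughput as in these experiments reduces to timing inference over a stream of frames; a minimal harness, with a stand-in `detect` function in place of a real SSD or YOLO model:

```python
import time

def detect(frame):
    """Stand-in for a real detector; a real model would run inference here."""
    time.sleep(0.001)  # simulate per-frame inference cost
    return []          # list of (box, class, score) detections

def measure_fps(frames, detector):
    """Average frames per second over a sequence of frames."""
    start = time.perf_counter()
    for frame in frames:
        detector(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

fps = measure_fps([None] * 100, detect)
print(f"{fps:.0f} FPS")
```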
34

Hypermaps : Beyond occupancy grids

Zaenker, Tobias January 2019 (has links)
Intelligent and autonomous robotic applications often require robots to have more information about their environment than is provided by traditional occupancy maps. One example is semantic maps, which provide qualitative descriptions of the environment. While research in the area of semantic mapping has been performed, most robotic frameworks still offer only occupancy maps. In this thesis, a framework is developed to handle multi-layered 2D maps in ROS. The framework offers occupancy and semantic layers but can be extended with new layer types in the future. Furthermore, an algorithm to automatically generate semantic maps from RGB-D images is presented. Software tests were performed to check whether the framework fulfills all set requirements, and the requirements were shown to be met. Furthermore, the semantic mapping algorithm was evaluated with different configurations in two test environments, a laboratory and a floor. While the object shapes of the generated semantic maps were not always accurate and some false detections occurred, most objects were successfully detected and placed on the semantic map. Possible ways to improve the accuracy of the mapping in the future are discussed.
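A multi-layered 2D map of the kind described can be sketched as an occupancy grid with named overlay layers; the class and layer names here are illustrative, not taken from the thesis's ROS framework:

```python
class LayeredMap:
    """2D grid with an occupancy layer plus arbitrary named overlay layers."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        # Each layer is a grid of per-cell values; occupancy: 0 = free, 1 = occupied
        self.layers = {"occupancy": [[0] * width for _ in range(height)]}

    def add_layer(self, name, fill=None):
        self.layers[name] = [[fill] * self.width for _ in range(self.height)]

    def set_cell(self, layer, x, y, value):
        self.layers[layer][y][x] = value

    def get_cell(self, layer, x, y):
        return self.layers[layer][y][x]

m = LayeredMap(10, 10)
m.add_layer("semantic")            # qualitative labels on top of occupancy
m.set_cell("occupancy", 3, 4, 1)   # mark the cell as occupied
m.set_cell("semantic", 3, 4, "chair")
print(m.get_cell("semantic", 3, 4))  # chair
```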
35

Virtual image sensors to track human activity in a smart house

Tun, Min Han January 2007 (has links)
With the advancement of computer technology, demand for more accurate and intelligent monitoring systems has also risen. The uses of computer vision and video analysis range from industrial inspection to surveillance. Object detection and segmentation are the first and fundamental tasks in the analysis of dynamic scenes. Traditionally, this detection and segmentation are done through temporal differencing or statistical modelling methods. One of the most widely used background modelling and segmentation algorithms is the Mixture of Gaussians method developed by Stauffer and Grimson (1999). During the past decade many such algorithms have been developed, ranging from parametric to non-parametric. Many of them utilise pixel intensities to model the background, but some use texture properties such as Local Binary Patterns. These algorithms function quite well under normal environmental conditions, and each has its own set of advantages and shortcomings. However, they share two drawbacks. The first is the stationary object problem: when moving objects become stationary, they get merged into the background. The second is that of light changes: when rapid illumination changes occur in the environment, these background modelling algorithms produce large areas of false positives.

These algorithms are capable of adapting to the change; however, the quality of the segmentation is very poor during the adaptation phase. In this thesis, a framework to suppress these false positives is introduced. Image properties such as edges and textures are utilised to reduce the amount of false positives during the adaptation phase. The framework is built on the idea of sequential pattern recognition. In any background modelling algorithm, the importance of multiple image features as well as different spatial scales cannot be overlooked. Failure to focus attention on these two factors makes it difficult to detect and reduce false alarms caused by rapid light change and other conditions. The use of edge features in false alarm suppression is also explored. Edges are somewhat more resistant to environmental changes in video scenes; the assumption here is that regardless of environmental changes, such as illumination change, the edges of objects should remain the same. The edge-based approach is tested on several videos containing rapid light changes and shows promising results. Texture is then used to analyse video images and remove false alarm regions. A texture gradient approach and Laws' Texture Energy Measures are used to find and remove false positives; Laws' Texture Energy Measures are found to perform better than the gradient approach. The results of using edges, texture, and different combinations of the two in false positive suppression are also presented in this work. This false positive suppression framework is applied to a smart house scenario that uses cameras to model "virtual sensors" that detect interactions of occupants with devices. Results show that the accuracy of the virtual sensors, compared with the ground truth, is improved.
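The background-modelling idea the thesis builds on can be sketched with a single running-average model per pixel, a deliberate simplification of the Mixture of Gaussians method, which maintains several Gaussians per pixel rather than one mean:

```python
def update_background(background, frame, alpha=0.05):
    """Exponential running average: slowly absorb the current frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=25):
    """Pixels far from the background model are flagged as foreground."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

bg = [100.0, 100.0, 100.0]       # toy 3-pixel grayscale background
frame = [100.0, 180.0, 102.0]    # middle pixel changed: a moving object
print(foreground_mask(bg, frame))  # [False, True, False]
bg = update_background(bg, frame)  # background slowly adapts to the scene
```

Note how the model exhibits exactly the drawbacks the thesis targets: a stationary object is eventually absorbed by `update_background`, and a sudden global illumination change trips the threshold everywhere at once.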
36

Fast Face Finding / Snabb ansiktsdetektering

Westerlund, Tomas January 2004 (has links)
Face detection is a classical application of object detection. There are many practical applications in which face detection is the first step: face recognition, video surveillance, image database management, video coding.

This report presents the results of an implementation of the AdaBoost algorithm to train a strong classifier to be used for face detection. The AdaBoost algorithm is fast and shows a low false detection rate, two characteristics which are important for face detection algorithms.

The application is an implementation of the AdaBoost algorithm with several command-line executables that support testing of the algorithm. The training and detection algorithms are separated from the rest of the application by a well-defined interface to allow reuse as a software library.

The source code is documented using the JavaDoc standard, and CppDoc is then used to produce detailed information on classes and relationships in HTML format.

The implemented algorithm is found to produce a relatively high detection rate and low false alarm rate, considering the poorly suited training data used.
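AdaBoost's core loop, reweighting examples so each new weak learner focuses on the previous learners' mistakes, can be sketched with one-feature threshold stumps. This is a toy 1-D version; the report's face detector would apply the same loop to image features:

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost with threshold stumps on 1-D data; labels ys are +1/-1."""
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in xs:  # candidate thresholds at the data points
            for polarity in (1, -1):
                preds = [polarity if x >= t else -polarity for x in xs]
                err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, polarity, preds)
        err, t, polarity, preds = best
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, polarity))
        # Increase weight on misclassified examples, then renormalize
        weights = [w * math.exp(-alpha * p * y) for w, p, y in zip(weights, preds, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    """The strong classifier: a weighted vote of the weak stumps."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

model = train_adaboost([1, 2, 3, 8, 9, 10], [-1, -1, -1, 1, 1, 1])
print(predict(model, 9), predict(model, 2))  # 1 -1
```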
37

Sharing visual features for multiclass and multiview object detection

Torralba, Antonio, Murphy, Kevin P., Freeman, William T. 14 April 2004 (has links)
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. Rather than corresponding to specific object parts, the jointly selected features are closer to edges and to generic features typical of many natural structures. Those generic features generalize better and considerably reduce the computational cost of multi-class object detection.
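The scaling claim can be illustrated numerically: with independently trained detectors the total feature count grows linearly in the number of classes, while shared features grow roughly logarithmically. The per-class constant and the exact log form below are illustrative, not figures from the paper:

```python
import math

def features_independent(classes, per_class=50):
    """Each class trains its own detector with its own features."""
    return classes * per_class

def features_shared(classes, per_class=50):
    """Shared features: total grows roughly logarithmically with classes."""
    return round(per_class * math.log2(classes + 1))

for c in (2, 10, 100, 1000):
    print(c, features_independent(c), features_shared(c))
```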
38

Contextual Influences on Saliency

Torralba, Antonio 14 April 2004 (has links)
This article describes a model for including scene/context priors in attention guidance. In the proposed scheme, visual context information is available early in the visual processing chain, modulating the saliency of image regions and providing an efficient shortcut for object detection and recognition. The scene is represented by means of a low-dimensional global description obtained from low-level features. The global scene features are then used to predict the probability of presence of the target object in the scene, as well as its location and scale, before exploring the image.
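The combination described, bottom-up saliency weighted by a scene-level prior over image locations, can be sketched on a 1-D row of regions; all of the numbers below are made up for illustration:

```python
def contextual_saliency(saliency, context_prior):
    """Modulate bottom-up saliency by the scene prior on target location."""
    combined = [s * p for s, p in zip(saliency, context_prior)]
    total = sum(combined)
    return [c / total for c in combined]  # renormalize to a distribution

# Bottom-up saliency says region 0 pops out; the scene prior (e.g. "street
# scene, pedestrians appear low in the image") favours region 2.
saliency = [0.6, 0.2, 0.2]
context_prior = [0.1, 0.2, 0.7]
print(contextual_saliency(saliency, context_prior))
```

With the prior applied, region 2 dominates attention even though it was not the most salient region bottom-up, which is the shortcut the article proposes.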
39

Contextual models for object detection using boosted random fields

Torralba, Antonio, Murphy, Kevin P., Freeman, William T. 25 June 2004 (has links)
We seek to both detect and segment objects in images. To exploit both local image data and contextual information, we introduce Boosted Random Fields (BRFs), which use boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.
40

A Formulation for Active Learning with Applications to Object Detection

Sung, Kah Kay, Niyogi, Partha 06 June 1996 (has links)
We discuss a formulation for active example selection for function learning problems. This formulation is obtained by adapting Fedorov's optimal experiment design to the learning problem. We specifically show how to analytically derive example selection algorithms for certain well defined function classes. We then explore the behavior and sample complexity of such active learning algorithms. Finally, we view object detection as a special case of function learning and show how our formulation reduces to a useful heuristic to choose examples to reduce the generalization error.
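The heuristic the abstract arrives at, choose the example about which the learner is least certain, can be sketched as query-by-variance over a committee of simple regressors; this is a simplification of the Fedorov-style optimal experiment design developed in the paper:

```python
def predictive_variance(predictions):
    """Variance of the committee's predictions at one candidate point."""
    mean = sum(predictions) / len(predictions)
    return sum((p - mean) ** 2 for p in predictions) / len(predictions)

def select_next_example(candidates, committee):
    """Pick the candidate input where the committee disagrees the most."""
    return max(candidates,
               key=lambda x: predictive_variance([m(x) for m in committee]))

# Toy committee: two linear models that agree near x = 0 and diverge far away,
# so the most informative example to label is the one farthest from the origin.
committee = [lambda x: 1.0 * x, lambda x: 1.5 * x]
print(select_next_example([0.0, 1.0, 5.0], committee))  # 5.0
```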
