161 |
Use of Thermal Imagery for Robust Moving Object Detection. Bergenroth, Hannah. January 2021 (has links)
This work proposes a system that utilizes both infrared and visual imagery to create a more robust object detection and classification system. The system consists of two main parts: a moving object detector and a target classifier. The first stage detects moving objects in the visible and infrared spectra using background subtraction based on Gaussian Mixture Models. Low-level fusion is performed to combine the foreground regions from the respective domains. In the second stage, a Convolutional Neural Network (CNN), pre-trained on the ImageNet dataset, classifies the detected targets into one of two pre-defined classes: human and vehicle. The performance of the proposed object detector is evaluated on multiple video streams recorded in different areas and under various weather conditions, which form a broad basis for testing the suggested method. The accuracy of the classifier is evaluated on images generated experimentally by the moving object detection stage, supplemented with the publicly available CIFAR-10 and CIFAR-100 datasets. In terms of detection results, the low-level fusion method proves more effective than using either domain separately. / <p>The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.</p>
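As a sketch of the first stage, per-pixel background modeling with low-level fusion of the visible and infrared foreground masks might look as follows. This is a single-Gaussian simplification of the Gaussian Mixture Model approach; the learning rate, threshold, and initial variance are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model, a single-Gaussian
    simplification of the Mixture-of-Gaussians approach described above.
    All parameter defaults are illustrative assumptions."""
    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.full(shape, 15.0 ** 2, dtype=np.float64)
        self.alpha = alpha   # learning rate for the background update
        self.k = k           # foreground threshold in standard deviations
        self._initialized = False

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if not self._initialized:
            self.mean[...] = frame
            self._initialized = True
        dist = np.abs(frame - self.mean)
        foreground = dist > self.k * np.sqrt(self.var)
        # Update the model only where the pixel matched the background.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (dist[bg] ** 2 - self.var[bg])
        return foreground

def fuse_masks(mask_visible, mask_infrared):
    """Low-level fusion: OR-combine the per-domain foreground masks."""
    return mask_visible | mask_infrared
```

In practice each spectral domain gets its own model instance, and the fused mask feeds the classification stage.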
|
162 |
Intelligent Collision Prevention System for SPECT Detectors by Implementing Deep Learning Based Real-Time Object Detection. Tahrir Ibraq Siddiqui (11173185). 23 July 2021 (has links)
<p>The SPECT-CT machines manufactured by Siemens consist of two heavy detector heads (~1,500 lbs each) that are moved into various configurations for radionuclide imaging. These detectors are driven by high-torque motors in the gantry that enable linear and rotational motion. If the detectors collide with large objects (stools, tables, patient extremities, etc.), they are very likely to damage the objects and be damaged as well. This research work proposes an intelligent real-time object detection system that prevents collisions between the detector heads and external objects in the path of the detectors' motion by implementing an end-to-end deep learning object detector. The research extensively documents all the work done in identifying the most suitable object detection framework for this use case; collecting and processing the image dataset of target objects; training the deep neural net to detect target objects; deploying the trained deep neural net in live demos through a real-time object detection application written in Python; improving the model's performance; and finally investigating methods to stop detector motion upon detecting external objects in the collision region. We successfully demonstrated that a <i>Caffe</i> version of <i>MobileNet-SSD</i> can be trained and deployed to detect target objects entering the collision region in real time by following the methodologies outlined in this paper. We then laid out the future work that must be done to bring this system into production, such as training the model to detect all possible objects that may be found in the collision region, controlling the activation of the RTOD application, and efficiently stopping the detector motion.</p>
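A minimal sketch of the collision-gating logic this abstract describes, checking whether any sufficiently confident detection overlaps a pre-defined collision region before issuing a stop command, could look like the following. The region coordinates and class names are hypothetical, and the 0.3 confidence floor is used here only as an example value.

```python
def boxes_intersect(box_a, box_b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2) in pixels."""
    return (box_a[0] < box_b[2] and box_b[0] < box_a[2] and
            box_a[1] < box_b[3] and box_b[1] < box_a[3])

def should_stop_detector(detections, collision_region, min_confidence=0.3):
    """Return True if any detection above the confidence floor overlaps
    the collision region. Detections are (class_name, confidence, box)
    tuples, a hypothetical format for illustration."""
    for _class_name, confidence, box in detections:
        if confidence >= min_confidence and boxes_intersect(box, collision_region):
            return True
    return False
```

In a deployed system, a True result would trigger the motion-stop command to the gantry controller.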
|
163 |
Code Files. Tahrir Ibraq Siddiqui (11173185). 23 July 2021 (has links)
1) real_time_object_detection.py: Python script for deploying the trained deep neural network on a live stream.
2) augmentation.py: Python script for augmenting Detector images.
3) tcp_send_command.py: Python script for sending a system-stop CPI command to the Gateway as a CPI message.
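A script like augmentation.py might apply simple geometric and photometric transforms to expand the training set. The transforms below are generic illustrations; the actual operations in augmentation.py are not listed in this entry.

```python
import numpy as np

def augment(image, seed=None):
    """Produce a few augmented variants of an image: horizontal flip,
    a random brightness shift, and a 90-degree rotation. Illustrative
    only; not the actual transforms from augmentation.py."""
    rng = np.random.default_rng(seed)
    shift = rng.integers(-30, 31)  # brightness offset in intensity levels
    variants = [
        np.fliplr(image),
        np.clip(image.astype(np.int16) + shift, 0, 255).astype(np.uint8),
        np.rot90(image),
    ]
    return variants
```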
|
164 |
Demos after First Training Run. Tahrir Ibraq Siddiqui (11173185). 23 July 2021 (has links)
Demos of deploying the caffemodel trained for 16,000 iterations after the initial training session, in the three scenarios outlined in the paper, with a minimum confidence score of 30% for detections.
|
165 |
Combo 5 and Combo 15 Demos. Tahrir Ibraq Siddiqui (11173185). 23 July 2021 (has links)
Demos of deploying the combo 5 caffemodel trained for 18,000 iterations and the combo 15 caffemodel trained for 25,000 iterations.
|
166 |
Computer vision-based systems for environmental monitoring applications. Porto Marques, Tunai. 12 April 2022 (has links)
Environmental monitoring refers to a host of activities involving the sampling or sensing of diverse properties of an environment in an effort to monitor, study, and better understand it. While potentially rich and scientifically valuable, these data often pose challenging interpretation tasks because of their volume and complexity. This thesis explores the efficiency of Computer Vision-based frameworks for processing large amounts of visual environmental monitoring data.
Since considering every potential type of visual environmental monitoring measurement is not possible, this thesis selects three data streams as representatives of diverse monitoring layouts: a visual out-of-water stream, a visual underwater stream, and an active acoustic underwater stream. The structure, objectives, challenges, solutions, and insights of each are presented in detail and used to assess the feasibility of Computer Vision within the environmental monitoring context. The thesis starts by providing an in-depth analysis of the definition and goals of environmental monitoring, as well as of the Computer Vision systems typically used in conjunction with it.
The document continues by studying the visual underwater stream via the design of a novel system employing a contrast-guided approach to the enhancement of low-light underwater images. This enhancement system outperforms multiple state-of-the-art methods, as supported by a group of commonly employed metrics.
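For context, a common baseline against which such enhancement systems are compared is global histogram equalization, sketched below in NumPy. This is a generic baseline for illustration, not the contrast-guided method proposed in the thesis.

```python
import numpy as np

def histogram_equalize(image):
    """Global histogram equalization of an 8-bit grayscale image:
    stretch the cumulative intensity distribution to span [0, 255]."""
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each intensity level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[image]
```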
A pair of detection frameworks capable of identifying schools of herring, salmon, and hake and swarms of krill are also presented in this document. The inputs used in their development, echograms, are visual representations of acoustic backscatter data from echosounder instruments, thus addressing the active acoustic underwater stream. These detectors use different Deep Learning paradigms to account for the unique challenges presented by each pelagic species. Specifically, the detection of krill and finfish is accomplished with a novel semantic segmentation network (U-MSAA-Net) capable of leveraging local and contextual information from feature maps of multiple scales.
To explore the visual out-of-water stream, we examine a large dataset composed of years' worth of images from a coastal region with heavy marine vessel traffic, which has been associated with significant anthropogenic footprints on marine environments. A novel system combining "traditional" Computer Vision and Deep Learning is proposed for identifying such vessels under diverse visual appearances in this monitoring imagery. Thorough experimentation shows that this system efficiently detects vessels of diverse sizes, shapes, colors, and levels of visibility.
The results and reflections presented in this thesis reinforce the hypothesis that Computer Vision offers an extremely powerful set of methods for the automatic, accurate, time- and space-efficient interpretation of large amounts of visual environmental monitoring data. / Graduate
|
167 |
Detekcija bolesti biljaka tehnikama dubokog učenja / Plant Disease Detection Using Deep Learning Techniques. Arsenović, Marko. 07 October 2020 (has links)
<p>The research presented in this thesis was aimed at developing a novel method based on deep convolutional neural networks for automated plant disease detection from leaf images. The experimental part of the thesis reviews the approaches to automatic plant disease detection available in the literature to date, as well as the limitations of the resulting models when used in natural conditions. The thesis introduces a new dataset of leaf images, currently the largest by number of images compared with publicly available datasets; experimentally confirms GAN-based augmentation approaches on leaf images; and proposes a new specialized two-phase deep neural network method as a potential answer to the shortcomings of existing solutions, providing the possibility of practical use of the newly developed model.</p>
|
168 |
Indoor 3D Scene Understanding Using Depth Sensors. Lahoud, Jean. 09 1900 (has links)
One of the main goals of computer vision is to achieve a human-like understanding of images. Nevertheless, image understanding has mainly been studied in the 2D image frame, so more information is needed to relate images to the 3D world. With the emergence of 3D sensors (e.g., the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g., a robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single-frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes, which could be used for self-supervised object segmentation. These methods allow capturing the physical extents of 3D objects, such as their sizes and actual locations within a scene.
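As an illustration of propagating 2D knowledge into 3D with a depth sensor, pixels inside a 2D detection box can be back-projected to camera coordinates with the standard pinhole model. The intrinsics in the sketch are placeholder values, not actual Kinect calibration, and this generic lifting step is only one ingredient of the 2D-driven 3D detection the dissertation describes.

```python
import numpy as np

def backproject_box(depth, box, fx, fy, cx, cy):
    """Lift pixels inside a 2D box (x1, y1, x2, y2) to 3D camera
    coordinates via the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth(u, v).
    Returns an (N, 3) point cloud, skipping missing (zero) depths."""
    x1, y1, x2, y2 = box
    us, vs = np.meshgrid(np.arange(x1, x2), np.arange(y1, y2))
    zs = depth[y1:y2, x1:x2]
    valid = zs > 0  # zero depth means no measurement at that pixel
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    return np.stack([xs[valid], ys[valid], zs[valid]], axis=1)
```

The resulting point cloud is what makes physical extents (sizes, locations) of detected objects recoverable.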
|
169 |
Real-time vehicle and pedestrian detection: a data-driven recommendation focusing on safety as a perception for autonomous vehicles. Vlahija, Chippen; Abdulkader, Ahmed. January 2020 (has links)
Interest in object detection has grown in many countries around the world with the rise of autonomous vehicles over the last decade. This paper focuses on a vision-based approach to real-time vehicle and pedestrian detection as a perception component for autonomous vehicles, using a convolutional neural network for object detection. A YOLOv3-tiny model is trained on the INRIA dataset to detect vehicles and pedestrians, and the model also measures the distance to the detected objects. Each step of the training process is described, including measures that combat overfitting and increase speed and accuracy. Based on the results of this training, the authors were able to increase the mean average precision (a way to measure accuracy for object detectors) from 31.3% to 62.14%, whilst maintaining a speed of 18 frames per second.
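The abstract states that the model also measures distance to detected objects. One common monocular approach derives distance from apparent size with the pinhole relation d = f * H / h, sketched below. The exact method the authors used is not specified in the abstract, and the focal length and object height in the example are assumed values.

```python
def estimate_distance(focal_length_px, real_height_m, bbox_height_px):
    """Monocular distance from apparent size: d = f * H / h, where f is
    the focal length in pixels, H the known real-world object height in
    meters, and h the detected bounding-box height in pixels."""
    return focal_length_px * real_height_m / bbox_height_px
```

For example, with an assumed 700 px focal length, a 1.7 m pedestrian whose box is 170 px tall would be estimated at 7 m.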
|
170 |
Cross Platform Training of Neural Networks to Enable Object Identification by Autonomous Vehicles. January 2019 (has links)
Autonomous vehicle technology has been evolving for years, since the Automated Highway System Project. However, this technology has come under increased scrutiny ever since an autonomous vehicle killed Elaine Herzberg, who was crossing the street in Tempe, Arizona, in March 2018. Recent tests of autonomous vehicles on public roads have faced opposition from nearby residents. Before these vehicles are widely deployed, it is imperative that the general public trust them. For this, the vehicles must be able to identify objects in their surroundings and demonstrate the ability to follow traffic rules while making decisions with human-like moral integrity when confronted with an ethical dilemma, such as an unavoidable crash that will injure either a pedestrian or the passenger.
Testing autonomous vehicles in real-world scenarios would pose a threat to people and property alike. A safe alternative is to simulate these scenarios and ensure that the resulting programs can work in real-world settings. Moreover, in order to detect a moral-dilemma situation quickly, the vehicle should be able to identify objects in real time while driving. Toward this end, this thesis investigates the use of cross-platform training for neural networks that perform visual identification of common objects in driving scenarios. Here, the object detection algorithm Faster R-CNN is used. The hypothesis is that it is possible to train a neural network model to detect objects from two different domains, simulated or physical, using transfer learning. As a proof of concept, an object detection model is trained via transfer learning on image datasets extracted from CARLA, a virtual driving environment. After bringing the total loss down to 0.4, the model is evaluated with an IoU metric. It is determined that the model has a precision of 100% and 75% for vehicles and traffic lights, respectively. The recall is found to be 84.62% and 75% for the same. It is also shown that this model can detect the same classes of objects in other virtual environments and in real-world images. Further modifications to the algorithm that may be required to improve performance are discussed as future work. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2019
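The IoU-based evaluation described above can be made concrete with a small sketch of greedy prediction-to-ground-truth matching and precision/recall computation. This is a generic implementation with a commonly used 0.5 IoU threshold; the thesis abstract does not state the threshold actually used.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match each prediction to an unmatched ground truth at
    the IoU threshold; unmatched predictions are false positives and
    unmatched ground truths are false negatives."""
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```

Running this per class (vehicles, traffic lights) yields per-class precision and recall figures like those reported above.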
|