11 |
COMPACT AND COST-EFFECTIVE MOBILE 2.4 GHZ RADAR SYSTEM FOR OBJECT DETECTION AND TRACKING. Seongha Park (5930117) 17 January 2019 (has links)
Various types of small mobile objects, such as recreational unmanned vehicles, have become readily accessible to the public because of technology advancements. These advancements make it possible to manufacture small, light, easy-to-control unmanned vehicles. As accessibility to recreational unmanned vehicles has grown, accidents and attacks that threaten people using these vehicles have been rising rapidly. A specific person can become the target of a threat using an unmanned vehicle in an open public place because of the vehicle's small size and mobility; even as an unmanned vehicle approaches a person, its compact size and maneuverability can make it difficult to detect before the person encounters it.
This research develops a radar system that can operate in open public areas to detect and track unmanned vehicles. Existing radar systems, such as those used for navigation, aviation, national defense, air traffic control, or weather forecasting, cannot be used to monitor and scan public places because of their large volume, high operating cost, and danger to human health. For example, if electromagnetic fields emitted from a high-power radar penetrate exposed skin or eyes, the absorbed energy can cause skin burns, cataracts, or worse (Zamanian & Hardiman, 2005). A radar system that can operate safely in public places is therefore necessary for monitoring and surveillance of such areas.
The hardware of the proposed radar system is composed of three parts: 1) the radio-frequency transmission and reception part, which we call the RF part; 2) the transmit-frequency control and reflected-signal amplification part, which we call the electric part; and 3) the data collection, data processing, and visualization part, which we call the post-processing part. The electric part is based on a lecture and labs designed by researchers at MIT Lincoln Laboratory (Charvat et al., 2012) and on another lecture and labs designed by a professor at the University of California, Davis (Liu, 2013). The UC Davis radar is itself based on the MIT Lincoln Laboratory design, which proposed a small, low-cost, low-power radar. The low transmit power of that design makes it safe to operate in public places without health restrictions; however, its surveillance range is relatively short and limited. To expand the monitoring area, the radar system proposed in this study increases the transmit power relative to the MIT Lincoln Laboratory design. Additionally, the system is designed and fabricated on printed circuit boards (PCBs) to make it compact and easy to adapt to various research purposes; for instance, the radar system can be used for mapping, localization, or imaging.
The first stage of post-processing is data collection. The raw data received and amplified by the electric part is collected by a compact computer, a Raspberry Pi 3, directly connected to the radar. Data is collected every second and transferred to a post-processing device, a laptop computer in this research. The post-processing device estimates the range of the object, applies filters for tracking, and visualizes the results. In this study, RabbitMQ (RMQ), a message broker implementing the Advanced Message Queuing Protocol (AMQP) (Richardson, 2012; Videla & Williams, 2012), is used for real-time data transfer between the Raspberry Pi 3 and the post-processing device. Because data collection, post-processing, and visualization must run continuously and sequentially, RMQ is used to exchange data between the processes so that collection and processing can proceed in parallel. The processed data shows the estimated distance of the object from the radar in real time, so the system can support monitoring of an area from a remote location when the two places are connected through a network.
The proposed radar system successfully detected and tracked an object within its line of sight. Although further study is required to improve the system, its accessibility and flexibility make it well suited to research areas requiring sensors for exploration, monitoring, or surveillance. Users who adopt this radar system for research purposes can develop their own applications matching their research environment, for example to support robots in obstacle avoidance or localization and mapping.
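The abstract does not name the modulation scheme, but the MIT Lincoln Laboratory cantenna design it builds on is a frequency-modulated continuous-wave (FMCW) radar, in which range is estimated from the beat frequency between the transmitted and received chirps. A minimal sketch of that range computation follows; the chirp time and bandwidth values are illustrative assumptions, not parameters taken from the thesis.

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_hz, chirp_s, bandwidth_hz):
    """Range of a target for a linear FMCW chirp:
    R = c * f_b * T / (2 * B), where f_b is the beat frequency,
    T the chirp duration, and B the swept bandwidth."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# Illustrative numbers: ~80 MHz of the 2.4 GHz ISM band swept in 20 ms;
# a 1 kHz beat tone then corresponds to a target at 37.5 m.
print(fmcw_range(1000.0, 0.02, 80e6))
```

With a larger transmit power, the same computation applies; only the maximum detectable range (set by the signal-to-noise ratio) grows.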
|
12 |
Object Detection with Two-stream Convolutional Networks and Scene Geometry Information. Wang, Binghao 06 March 2019 (has links)
With the emergence of Convolutional Neural Network (CNN) models, the precision of image classification has improved significantly in recent years. The Regional CNN (R-CNN) model was proposed to solve object detection by combining region proposals with a CNN. This model improves detection accuracy but suffers from slow inference because of its multi-stage structure. The Single Shot Detector (SSD) network was later proposed to further improve object detection benchmarks in both accuracy and speed. However, the SSD model still suffers from a high miss rate on small targets, since datasets are usually dominated by medium and large objects, which do not share the same features as small ones.
On the other hand, geometric analysis of dataset images can provide additional information before model training. In this thesis, we propose several SSD-based models with parameters on the feature extraction layers adjusted using geometric analysis of the KITTI and Caltech Pedestrian datasets. This analysis extends SSD's capability for small-object detection. To further improve detection accuracy, we propose a two-stream network, which uses one stream to detect medium to large objects and another stream specifically for small objects. This two-stream model achieves performance competitive with other algorithms on the KITTI and Caltech Pedestrian benchmarks. These results are presented and analysed in this thesis as well.
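One way to act on such geometric analysis, sketched here under our own assumptions (the thesis does not publish its exact procedure in this abstract), is to derive per-layer anchor scales from the empirical distribution of object heights, so that the earliest feature layers get scales small enough for the dataset's small objects:

```python
def suggest_anchor_scales(box_heights, num_layers=6, img_size=300):
    """Assign one anchor scale per SSD feature layer by taking quantiles
    of the observed object heights (in pixels), normalized by the input
    image size. Small quantiles go to early, high-resolution layers."""
    hs = sorted(box_heights)
    scales = []
    for i in range(num_layers):
        q = hs[min(len(hs) - 1, (i * len(hs)) // num_layers)]
        scales.append(q / img_size)
    return scales

# Toy example: six observed heights, three feature layers.
print(suggest_anchor_scales([10, 20, 30, 40, 50, 60], num_layers=3))
```

On a pedestrian-heavy dataset like Caltech, such quantiles skew small, which is exactly the mismatch with SSD's default scales that motivates the adjustment.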
|
13 |
Visual servoing for mobile robots navigation with collision avoidance and field-of-view constraints. Fu, Wenhao 18 April 2014 (has links)
This thesis is concerned with the problem of vision-based navigation for mobile robots in indoor environments. Many works have solved this navigation problem using a visual path, namely appearance-based navigation. However, under this scheme the robot's motion is limited to the trained visual path. A potential collision during the navigation process can make the robot deviate from the current visual path, and the visual landmarks can then be lost from the current field of view. To the best of our knowledge, few works consider collision avoidance and landmark loss within the framework of appearance-based navigation. We outline a mobile robot navigation framework that enhances the capability of the appearance-based method, especially under collision avoidance and field-of-view constraints. Our framework introduces several technical contributions. First, motion constraints are incorporated into visual landmark detection to improve detection performance. Second, we model the obstacle boundary using a B-spline; the B-spline representation has no sharp corners and can generate a smooth motion for the collision avoidance task. Additionally, we propose a vision-based control strategy that can deal with complete target loss. Finally, we use a spherical image to handle the ambiguity and infinity projections that arise with perspective projection. Real-world experiments demonstrate the feasibility and effectiveness of our framework and methods.
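As a rough illustration of why a B-spline boundary suits smooth avoidance motion, here is a minimal evaluation of one uniform cubic B-spline segment (a generic textbook formulation, not the thesis's exact parameterization). Cubic B-splines are C2-continuous, so a boundary built from such segments has no sharp corners:

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1],
    given four consecutive 2-D control points. The curve stays inside
    the control polygon and joins adjacent segments with C2 continuity."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Sampling t over [0, 1] for successive control-point windows traces a
# smooth obstacle contour suitable for generating avoidance motion.
print(cubic_bspline_point((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), 0.0))
```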
|
14 |
Object Detection Using Multiple Level Annotations. Xu, Mengmeng 04 1900 (has links)
Object detection is a fundamental problem in computer vision. Impressive results have been achieved on large-scale detection benchmarks by fully-supervised object detection (FSOD) methods. However, FSOD approaches require a tremendous number of instance-level annotations, which are time-consuming to collect. In contrast, weakly supervised object detection (WSOD) exploits easily collected image-level labels but suffers from relatively inferior detection performance.
This thesis studies hybrid learning methods for the object detection problem. We intend to train an object detector from a dataset where both instance-level and image-level labels are employed. Extensive experiments on the challenging PASCAL VOC 2007 and 2012 benchmarks demonstrate the effectiveness of our method, which offers a trade-off between collecting fewer annotations and building a more accurate object detector. Our method is also a strong baseline bridging the wide gap between FSOD and WSOD performance.
Based on the hybrid learning framework, we further study the problem of object detection from a novel perspective in which the annotation budget constraints are taken into consideration. When provided with a fixed budget, we propose a strategy for building a diverse and informative dataset that can be used to optimally train a robust detector. We investigate both optimization and learning-based methods to sample which images to annotate and which level of annotations (strongly or weakly supervised) to annotate them with.
By combining an optimal image/annotation selection scheme with the hybrid supervised learning, we show that one can achieve the performance of a strongly supervised detector on PASCAL VOC 2007 while saving 12.8% of its original annotation budget. Furthermore, when 100% of the budget is used, it surpasses this performance by 2.0 mAP percentage points.
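The thesis investigates optimization- and learning-based selection; as a hedged, much-simplified stand-in, a greedy gain-per-cost heuristic illustrates the flavor of choosing, per image, whether to buy a strong (instance-level) or weak (image-level) annotation under a fixed budget. The costs and expected gains below are hypothetical:

```python
def greedy_annotation_plan(candidates, budget):
    """candidates: list of (image_id, level, cost, expected_gain) tuples,
    where level is 'strong' (instance boxes) or 'weak' (image labels).
    Greedily pick at most one annotation level per image, in order of
    gain-per-cost, without exceeding the budget."""
    plan, spent, used = [], 0.0, set()
    for img, level, cost, gain in sorted(
            candidates, key=lambda c: c[3] / c[2], reverse=True):
        if img in used or spent + cost > budget:
            continue
        plan.append((img, level))
        used.add(img)
        spent += cost
    return plan

# Hypothetical example: a strong label on image "a" costs 10 units for an
# expected gain of 5; weak labels cost 1 unit each. With a budget of 2,
# the greedy plan buys two weak labels instead of one strong one.
print(greedy_annotation_plan(
    [("a", "strong", 10, 5), ("a", "weak", 1, 2), ("b", "weak", 1, 1)], 2))
```

The actual thesis formulation also accounts for dataset diversity, which a pure gain-per-cost greedy ignores.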
|
15 |
A High-performance Architecture for Training Viola-Jones Object Detectors. Lo, Charles 20 November 2012 (has links)
The object detection framework developed by Viola and Jones has become very popular due to its high quality and detection speed. However, the complexity of the computation required to train a detector makes it difficult to develop and test potential improvements to this algorithm or train detectors in the field.
In this thesis, a configurable, high-performance FPGA architecture is presented to accelerate this training process. The architecture, structured as a systolic array of pipelined compute engines, is constructed to provide high throughput and make efficient use of the available external memory bandwidth. Extensions to the Viola-Jones detection framework are implemented to demonstrate the flexibility of the architecture. The design is implemented on a Xilinx ML605 development platform running at 200 MHz and obtains a 15-fold speed-up over a multi-threaded OpenCV implementation running on a high-end processor.
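The bulk of Viola-Jones training time goes into evaluating enormous numbers of Haar-like features over integral images for every boosting round, which is the computation a systolic array can parallelize. A software sketch of that core operation (generic, not a model of the thesis's architecture):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over rectangle [x, x+w) x [y, y+h) in O(1) via four lookups."""
    A = ii[y - 1][x - 1] if x and y else 0
    B = ii[y - 1][x + w - 1] if y else 0
    C = ii[y + h - 1][x - 1] if x else 0
    D = ii[y + h - 1][x + w - 1]
    return D - B - C + A

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

# On a 24x24 window there are ~160k such features; training evaluates each
# on thousands of samples per boosting round, hence the appeal of hardware.
ii = integral_image([[1, 2], [3, 4]])
print(rect_sum(ii, 0, 0, 2, 2), haar_two_rect(ii, 0, 0, 2, 2))
```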
|
17 |
Moving Object Detection Based on Ordered Dithering Codebook Model. Guo, Jing-Ming, Thinh, Nguyen Van, Lee, Hua 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / This paper presents an effective multi-layer background modeling method that detects moving objects by exploiting novel distinctive features and the hierarchical structure of the Codebook (CB) model. In the block-based structure, the mean-color feature within a block often does not contain sufficient texture information, causing incorrect classification, especially in layers with large block sizes. Thus, the Binary Ordered Dithering (BOD) feature becomes an important supplement to the mean RGB feature. In summary, the uniqueness of this approach is the incorporation of a halftoning scheme into the codebook model, yielding superior performance over existing methods.
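A binary ordered dithering feature can be sketched with the classic 4x4 Bayer matrix: each pixel in a block is thresholded against a spatially varying level, producing a bit pattern that captures texture the block mean misses. The threshold scaling used here is our own assumption, not the paper's exact definition:

```python
# Standard 4x4 Bayer ordered-dithering matrix (values 0..15).
BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bod_feature(block):
    """Binarize a 4x4 grayscale block (values 0-255) against the Bayer
    thresholds, scaled to the 8-bit range. A flat block yields a bit
    pattern determined by its mean level; textured blocks yield patterns
    the mean-color feature alone cannot distinguish."""
    return [[1 if block[y][x] > (BAYER_4[y][x] + 0.5) * 16 else 0
             for x in range(4)] for y in range(4)]

# A uniform mid-gray block turns exactly half of the thresholds on.
flat = [[128] * 4 for _ in range(4)]
print(sum(sum(row) for row in bod_feature(flat)))
```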
|
18 |
Reading between the lines : object localization using implicit cues from image tags. Hwang, Sung Ju 10 November 2010 (has links)
Current uses of tagged images typically exploit only the most explicit information: the link between the nouns named and the objects present somewhere in the image. We propose to leverage "unspoken" cues that rest within an ordered list of image tags so as to improve object localization. We define three novel implicit features from an image's tags: the relative prominence of each object as signified by its order of mention, the scale constraints implied by unnamed objects, and the loose spatial links hinted by the proximity of names on the list. By learning a conditional density over the localization parameters (position and scale) given these cues, we show how to improve both accuracy and efficiency when detecting the tagged objects. We validate our approach with 25 object categories from the PASCAL VOC and LabelMe datasets, and demonstrate its effectiveness relative to both traditional sliding windows and a visual context baseline.
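A toy version of the prominence cue: convert an object's order of mention into a prior weight and rescale detector confidences with it. This is only a stand-in for the learned conditional density over position and scale; the linear prior is our own invention for illustration:

```python
def prominence_prior(tag_order, num_tags):
    """Earlier-mentioned objects tend to be larger and more central;
    map the 0-based order of mention to a weight in (0, 1]."""
    return 1.0 - tag_order / (num_tags + 1.0)

def rescore(candidates, tag_order, num_tags):
    """candidates: list of (score, box). Scale detector scores by the
    tag-order prior and return them best-first. In the real method a
    learned density modulates position and scale, not just the score."""
    w = prominence_prior(tag_order, num_tags)
    return sorted(((s * w, box) for s, box in candidates), reverse=True)

# An object mentioned second among three tags gets a 0.75 prior weight.
print(rescore([(0.8, "window_1"), (0.5, "window_2")], 1, 3))
```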
|
19 |
A Novel Animal Detection Technique for Intelligent Vehicles. Zhao, Weihong 29 August 2018 (has links)
Animal-vehicle collisions have been a topic of concern for years, especially in North America. To mitigate the problem, this thesis focuses on animal detection based on the onboard camera of intelligent vehicles.
In the domain of image classification and object detection, methods based on shape matching and hand-crafted local features had reached a technical plateau for decades. The development of the Convolutional Neural Network (CNN) brought a new breakthrough: the evolution of CNN architectures has dramatically improved image classification performance and, in turn, boosted effective CNN-based object detection frameworks. Notably, the family of Region-based Convolutional Neural Networks (R-CNN) performs well by combining region proposals with a CNN. In this thesis, we propose to apply a new region proposal method, Maximally Stable Extremal Regions (MSER), within Fast R-CNN to construct the animal detection framework.
The MSER algorithm detects stable regions that are invariant to scale, rotation, and viewpoint changes. We generate regions of interest from the MSER output in two ways: by enclosing all pixels of the resulting pixel list with a minimum enclosing rectangle (PL MSER), and by fitting the resulting elliptical region with an approximating box (EL MSER). We then preprocess the bounding boxes of PL MSER and EL MSER to improve the recall of detection. The preprocessing steps consist of filtering out undesirable regions with an aspect-ratio model, clustering bounding boxes to merge overlapping regions, and modifying and then enlarging the regions to cover the entire animal. We evaluate the two region proposal methods by measuring recall over an IoU-threshold curve. The proposed MSER method covers the expected regions better than Edge Boxes and the Region Proposal Network (RPN) of Faster R-CNN. We apply the MSER region proposal method to the R-CNN and Fast R-CNN frameworks. Experiments on an animal database with moose, deer, elk, and horses show that Fast R-CNN with MSER achieves better accuracy and faster speed than R-CNN with MSER. Concerning the two variants of MSER, the experimental results show that PL MSER is faster than EL MSER, while EL MSER attains higher precision than PL MSER. Also, by altering the network structure used in Fast R-CNN, we verify that networks stacking more layers achieve higher accuracy and recall.
In addition, we compare the Fast R-CNN framework using MSER region proposals with the state-of-the-art Faster R-CNN on our animal database. Using the same CNN structure, the proposed Fast R-CNN with MSER attains a higher average accuracy for animal detection, 0.73, compared to 0.42 for Faster R-CNN. In terms of detection quality, the proposed Fast R-CNN with MSER also achieves a better IoU histogram than Faster R-CNN.
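The two box-generation routes (PL MSER and EL MSER) reduce to plain geometry: PL MSER takes the min/max of the pixel list, while EL MSER, as we interpret it here, takes the axis-aligned bounding box of the fitted ellipse. A hedged sketch:

```python
import math

def pixels_to_box(pixels):
    """PL MSER: minimum enclosing axis-aligned rectangle of a pixel
    list, as (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (min(xs), min(ys), max(xs), max(ys))

def ellipse_to_box(cx, cy, a, b, theta):
    """EL MSER (our interpretation): axis-aligned bounding box of an
    ellipse with center (cx, cy), semi-axes a and b, rotated by theta.
    Half-extents follow from the rotated parametric form of the ellipse."""
    dx = math.hypot(a * math.cos(theta), b * math.sin(theta))
    dy = math.hypot(a * math.sin(theta), b * math.cos(theta))
    return (cx - dx, cy - dy, cx + dx, cy + dy)

# PL is a cheap scan over pixels; EL needs the ellipse fit first, which
# matches the thesis finding that PL is faster while EL is more precise.
print(pixels_to_box([(1, 2), (3, 0), (2, 5)]))
print(ellipse_to_box(0.0, 0.0, 2.0, 1.0, 0.0))
```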
|
20 |
Automatic Firearm Detection by Deep Learning. Kambhatla, Akhila 01 May 2020 (has links)
Surveillance cameras are a great support in crime investigation and proximity alarms, and they play a vital role in public safety. However, current surveillance systems require continuous human supervision for monitoring. The primary goal of this thesis is to prevent firearm-related violence and injuries: automatic firearm detection enhances security and safety, so the main motivation is to introduce a deep learning object detection model that detects firearms and alerts the corresponding police department. Visual object detection is a fundamental recognition problem in computer vision that aims to find objects of certain target classes, with precise localization in the input image, and to assign each the corresponding label. However, challenges arise from wide variations in shape, size, and appearance, and from occlusion by the weapon carrier. There are also trade-offs in selecting the best object detection model, so three deep learning models are selected, explained, and compared in their ability to detect firearms. The dataset in this thesis is a customized collection of different categories of firearms, including pistols, revolvers, handguns, bullets, and rifles, along with human detection. The entire dataset is annotated manually in PASCAL VOC format. Data augmentation has been used to enlarge the dataset and to facilitate detecting firearms that are deformed or partially occluded. To detect firearms, this thesis develops and evaluates unified networks such as SSD and two-stage object detectors such as Faster R-CNN. SSD is easy to understand and detects objects quickly; however, it fails to detect smaller objects. Faster R-CNN is efficient and able to detect smaller firearms in the scene. Each class attained a confidence score of more than 90%.
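Since the dataset is annotated manually in PASCAL VOC format, a minimal stdlib parser shows what those annotations look like on disk. The sample file content below is illustrative, not taken from the thesis dataset:

```python
import xml.etree.ElementTree as ET

# Hypothetical PASCAL VOC annotation for one image with a single firearm.
VOC_XML = """<annotation>
  <object><name>pistol</name>
    <bndbox><xmin>48</xmin><ymin>30</ymin><xmax>120</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Extract (label, (xmin, ymin, xmax, ymax)) pairs from a PASCAL VOC
    annotation, the format both SSD and Faster R-CNN pipelines commonly
    consume for training."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        box = tuple(int(bb.find(k).text)
                    for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((obj.find("name").text, box))
    return boxes

print(parse_voc(VOC_XML))
```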
|