  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Moving Object Detection Based on Ordered Dithering Codebook Model

Guo, Jing-Ming, Thinh, Nguyen Van, Lee, Hua 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / This paper presents an effective multi-layer background modeling method to detect moving objects by exploiting the advantages of novel distinctive features and the hierarchical structure of the Codebook (CB) model. In the block-based structure, the mean-color feature within a block often does not contain sufficient texture information, causing incorrect classification, especially in large block-size layers. Thus, the Binary Ordered Dithering (BOD) feature becomes an important supplement to the mean RGB feature. In summary, the uniqueness of this approach is the incorporation of the halftoning scheme into the codebook model for superior performance over existing methods.
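The codebook idea behind this entry can be illustrated with a minimal per-pixel sketch: each pixel keeps a list of codewords (running mean, count), and a new value is classified as background if it matches an existing codeword. This is only a toy, single-channel version under my own naming (`match_codeword`, `update_pixel`); the paper's multi-layer, block-based model with BOD features is considerably richer, and a real implementation would also age out stale codewords rather than appending foreground values directly.

```python
def match_codeword(codebook, value, tol=10):
    """Return the index of a codeword whose mean is within tol of value, else -1."""
    for i, (mean, count) in enumerate(codebook):
        if abs(value - mean) <= tol:
            return i
    return -1


def update_pixel(codebook, value, tol=10):
    """Classify a pixel value as background (True) or foreground (False).

    A matched codeword's mean is updated as a running average; unmatched
    values start a new codeword (a real model would cache these separately).
    """
    i = match_codeword(codebook, value, tol)
    if i >= 0:
        mean, count = codebook[i]
        codebook[i] = ((mean * count + value) / (count + 1), count + 1)
        return True
    codebook.append((float(value), 1))
    return False
```

Training on a few stable intensities makes similar values background, while an outlier is flagged as foreground.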
2

Moving object detection in urban environments

Gillsjö, David January 2012 (has links)
Successful and high precision localization is an important feature for autonomous vehicles in an urban environment. GPS solutions are not good on their own and laser, sonar and radar are often used as complementary sensors. Localization with these sensors requires the use of techniques grouped under the acronym SLAM (Simultaneous Localization And Mapping). These techniques work by comparing the current sensor inputs to either an incrementally built or known map, also adding the information to the map. Most of the SLAM techniques assume the environment to be static, which means that dynamics and clutter in the environment might cause SLAM to fail. To obtain a more robust algorithm, the dynamics need to be dealt with. This study seeks a solution where measurements from different points in time can be used in pairwise comparisons to detect non-static content in the mapped area. Parked cars could for example be detected at a parking lot by using measurements from several different days. The method successfully detects most non-static objects in the different test datasets from the sensor. The algorithm can be used in conjunction with Pose-SLAM to get a better localization estimate and a map for later use. This map is good for localization with SLAM or other techniques since only static objects are left in it.
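The pairwise-comparison idea can be sketched as a toy: flag points of one scan that have no nearby neighbour in a scan taken at another time. This is my own minimal stand-in (the function name and tolerance are illustrative), not the thesis's Pose-SLAM-integrated method, which additionally reasons about occlusion and sensor noise.

```python
import math


def detect_non_static(scan_a, scan_b, tol=0.3):
    """Flag points of scan_a (2D tuples) with no neighbour in scan_b within tol.

    Brute-force O(n*m) comparison; a real system would use a spatial index.
    """
    changed = []
    for p in scan_a:
        if all(math.dist(p, q) > tol for q in scan_b):
            changed.append(p)
    return changed
```

A point near a point of the other scan is kept as static; an isolated point is reported as potentially non-static content.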
3

Research and Development of DSP Based System for Tracking An Arbitrary-Shaped Object

Lin, Wei-Ting 12 July 2005 (has links)
A DSP-based system is developed in this thesis for tracking "an arbitrary-shaped object". It uses a CCD camera to capture images and detects moving objects in the video sequence. When we want to track a target of interest, we place the target in the view of the camera. If the target moves, the system locks onto it and extracts its contour using an active contour model. After extracting the contour, the system starts to track the target and shows the locked image on the LCD screen. The tracking system includes three sub-systems: "Moving Object Detection", "Active Contour Model", and "Contour Matching". The experimental results meet expectations and show good performance and robustness.
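As a minimal illustration of the detection stage that precedes contour extraction, a simple absolute frame-difference mask on grayscale frames can be used. This sketch is mine (nested lists stand in for image buffers); the thesis's DSP detector feeding the active contour model is more elaborate.

```python
def frame_diff_mask(prev, curr, thresh=25):
    """Binary motion mask by absolute frame differencing.

    prev and curr are grayscale frames as nested lists of equal shape;
    a pixel is marked moving (1) when its intensity change exceeds thresh.
    """
    return [
        [1 if abs(c - p) > thresh else 0 for p, c in zip(row_p, row_c)]
        for row_p, row_c in zip(prev, curr)
    ]
```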
4

Dynamic Data-Driven Visual Surveillance of Human Crowds via Cooperative Unmanned Vehicles

Minaeian, Sara January 2017 (has links)
Visual surveillance of human crowds in a dynamic environment has attracted a great amount of computer vision research effort in recent years. Moving object detection, which conventionally includes motion segmentation and, optionally, object classification, is the first major task for any visual surveillance application. After detecting the targets, estimation of their geo-locations is needed to place them in a common reference coordinate system for higher-level decision-making. Depending on the required decision fidelity, multi-target data association may also be needed at higher levels to differentiate multiple targets in a series of frames. Applying all these vision-based algorithms to a crowd surveillance system (a major application studied in this dissertation) using a team of cooperative unmanned vehicles (UVs) introduces new challenges to the problem. Since the visual sensors move with the UVs, and thus the targets and the environment are dynamic, the complexity and uncertainty of the video processing increase. Moreover, the limited onboard computational resources call for more efficient algorithms. Responding to these challenges, the goal of this dissertation is to design and develop an effective and efficient visual surveillance system based on the dynamic data-driven application systems (DDDAS) paradigm, to be used by the cooperative UVs for autonomous crowd control and border patrol.
The proposed visual surveillance system includes different modules: 1) a motion detection module, in which a new method for detecting multiple moving objects, based on sliding window is proposed to segment the moving foreground using the moving camera onboard the unmanned aerial vehicle (UAV); 2) a target recognition module, in which a customized method based on histogram-of-oriented-gradients is applied to classify the human targets using the onboard camera of unmanned ground vehicle (UGV); 3) a target geo-localization module, in which a new moving-landmark-based method is proposed for estimating the geo-location of the detected crowd from the UAV, while a heuristic method based on triangulation is applied for geo-locating the detected individuals via the UGV; and 4) a multi-target data association module, in which the affinity score is dynamically adjusted to comply with the changing dispersion of the detected targets over successive frames. In this dissertation, a cooperative team of one UAV and multiple UGVs with onboard visual sensors is used to take advantage of the complementary characteristics (e.g. different fidelities and view perspectives) of these UVs for crowd surveillance. The DDDAS paradigm is also applied toward these vision-based modules, where the computational and instrumentation aspects of the application system are unified for more accurate or efficient analysis according to the scenario. To illustrate and demonstrate the proposed visual surveillance system, aerial and ground video sequences from the UVs, as well as simulation models are developed, and experiments are conducted using them. The experimental results on both developed videos and literature datasets reveal the effectiveness and efficiency of the proposed modules and their promising performance in the considered crowd surveillance application.
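The UGV-side geo-localization is described as triangulation-based; a minimal two-bearing version of that idea (my own formulation and names, not the dissertation's heuristic method) intersects two observation rays from known observer positions.

```python
import math


def triangulate_2d(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays to estimate a target's 2D position.

    p1, p2 are observer positions (x, y); bearings are in radians.
    Returns None when the bearings are (near-)parallel.
    """
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # 2D cross product of the two ray directions.
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-9:
        return None  # parallel rays: no unique intersection
    # Parameter along ray 1 where the rays cross.
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / den
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])
```

Two observers at (0, 0) and (2, 0) sighting the target at 45° and 135° respectively locate it at (1, 1).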
5

Use of Thermal Imagery for Robust Moving Object Detection

Bergenroth, Hannah January 2021 (has links)
This work proposes a system that utilizes both infrared and visual imagery to create a more robust object detection and classification system. The system consists of two main parts: a moving object detector and a target classifier. The first stage detects moving objects in the visible and infrared spectrum using background subtraction based on Gaussian Mixture Models. Low-level fusion is performed to combine the foreground regions from the respective domains. For the second stage, a Convolutional Neural Network (CNN), pre-trained on the ImageNet dataset, is used to classify the detected targets into one of the pre-defined classes: human and vehicle. The performance of the proposed object detector is evaluated using multiple video streams recorded in different areas and under various weather conditions, which form a broad basis for testing the suggested method. The accuracy of the classifier is evaluated on experimentally generated images from the moving object detection stage, supplemented with the publicly available CIFAR-10 and CIFAR-100 datasets. The low-level fusion method proves more effective than using either domain separately in terms of detection results. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
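One plausible reading of the low-level fusion step is a pixel-wise union of the foreground masks from the two domains; the sketch below assumes that operator (the abstract does not state the exact fusion rule) and uses nested lists in place of image arrays.

```python
def fuse_masks(vis_mask, ir_mask):
    """Pixel-wise union of binary foreground masks from the visible and
    infrared streams: a pixel is foreground if either detector flags it."""
    return [
        [int(v or r) for v, r in zip(row_v, row_r)]
        for row_v, row_r in zip(vis_mask, ir_mask)
    ]
```

A union keeps detections that only one modality sees (e.g. warm bodies at night in IR), at the cost of also keeping each modality's false positives; an intersection would trade the other way.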
6

Change detection from mobile laser scanning point clouds

Xiao, Wen 12 November 2015 (has links)
Mobile mapping systems are increasingly used for street environment mapping; mobile laser scanning technology in particular enables precise street mapping, scene understanding, facade modelling, etc. In this research, change detection from mobile laser scanning point clouds is investigated. First, street environment change detection using RIEGL data is studied for the purposes of database updating and temporary object identification. An occupancy-based method is presented to overcome the challenges encountered by conventional distance-based methods, such as occlusion and anisotropic sampling. Occluded areas are identified by modelling the occupancy states within the laser scanning range. The gaps between points and scan lines are interpolated in the sensor reference frame, where the sampling density is isotropic. Even though there are some conflicts on penetrable objects, e.g. trees and fences, the occupancy-based method is able to enhance the point-to-triangle distance-based method. The change detection method is also applied to data acquired by different laser scanners at different temporal scales, with the intention of supporting a wider range of applications. The local sensor reference frame is adapted to the Velodyne laser scanning geometry, and the occupancy-based method is implemented to detect moving objects. Since the method detects the change of each point, moving objects are detected at point level. As the Velodyne scanner constantly scans the surroundings, the trajectories of moving objects can be recovered. A simultaneous detection and tracking algorithm is proposed to recover pedestrian trajectories in order to accurately estimate pedestrian traffic flow in public places.
Changes can be detected not only at point level, but also at object level. The changes of cars parked on street sides at different times are detected to help regulate on-street car parking, since the parking duration is limited. In this case, cars are detected first, then corresponding cars are compared across passes. Apart from car changes, parking positions and car types are also important information for parking management. All these processes are solved in a supervised learning framework. Furthermore, a model-based car reconstruction method is proposed to precisely locate cars. The model parameters are also treated as car features for better decision making. Moreover, the geometrically accurate models can be used for visualization purposes. Under the theme of change detection, related topics, e.g. tracking, classification, and modelling, are also studied with practical applications in mind. More importantly, the change detection methods are applied to different data acquisition geometries at multiple temporal scales. Both bottom-up (point-based) and top-down (object-based) change detection strategies are investigated.
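The occupancy-based idea reduces, in a toy form, to comparing per-cell states between two epochs while withholding judgment on occluded cells. The flat grid representation and names below are illustrative only, not the thesis's ray-based 3D formulation.

```python
def occupancy_change(grid_t1, grid_t2):
    """Report indices of cells whose state changed between two epochs.

    Cells are 'occ', 'free', or 'unknown'; 'unknown' (occluded at that
    epoch) cells are skipped, which is the key advantage over purely
    distance-based comparison.
    """
    changes = []
    for i, (a, b) in enumerate(zip(grid_t1, grid_t2)):
        if "unknown" in (a, b):
            continue  # occluded at one epoch: no decision possible
        if a != b:
            changes.append(i)
    return changes
```

A distance-based comparison would flag the occluded cell as changed; the occupancy state lets the method stay silent there.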
7

Multiple Hypothesis Tracking For Multiple Visual Targets

Turker, Burcu 01 April 2010 (has links) (PDF)
The visual target tracking problem consists of two topics: obtaining targets from camera measurements and target tracking. Even though it has been studied for more than 30 years, some problems are still not completely solved. Especially in the case of multiple targets, these include the association of measurements to targets, the creation of new targets and the deletion of old ones. Moreover, it is very important to deal suitably with occlusion and crossing targets. We believe that a slightly modified version of multiple hypothesis tracking can successfully deal with most of the aforementioned problems. Distance, track size, track color, gate size and track history are used as parameters to evaluate the hypotheses generated for the measurement-to-track association problem, whereas size and color are used as parameters for the occlusion problem. The overall tracker has been fine-tuned over some scenarios, and it has been observed that it performs well over the testing scenarios as well. Furthermore, the performance of the tracker is analyzed with respect to those parameters in both association and occlusion handling situations.
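Gating, one of the hypothesis-evaluation parameters mentioned above, can be sketched as keeping only the measurements inside a track's circular gate, which bounds the number of measurement-to-track hypotheses MHT must generate. This is a generic MHT ingredient under my own naming, not the thesis's full scoring over distance, size, color and history.

```python
import math


def gate_measurements(track_pos, measurements, gate_size):
    """Return the measurements (2D tuples) inside a circular gate of
    radius gate_size centred on the track's predicted position."""
    return [m for m in measurements if math.dist(track_pos, m) <= gate_size]
```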
8

Detecting And Tracking Moving Objects With An Active Camera In Real Time

Karakas, Samet 01 September 2011 (has links) (PDF)
Moving object detection techniques can be divided into two categories based on the type of camera, which is either static or active. Methods for static cameras detect moving objects from the changing regions between video frames. However, the same approach is not suitable for active cameras; moving object detection with an active camera generally needs more complex algorithms and unique solutions. The aim of this thesis work is real-time detection and tracking of moving objects with an active camera. For this purpose, feature-based algorithms are implemented due to their computational efficiency, and SURF (Speeded Up Robust Features) is mainly used in these algorithms. The algorithm is developed in a C++ environment, making frequent use of the OpenCV library. The developed algorithm is capable of detecting and tracking moving objects using a PTZ (Pan-Tilt-Zoom) camera at a frame rate of approximately 5 fps and a resolution of 640x480.
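Feature-based tracking of this kind ultimately rests on descriptor matching between frames; a toy brute-force version with Lowe's ratio test is sketched below. The thesis uses OpenCV's SURF machinery in C++, not this code; descriptors here are short lists of floats and the names are mine.

```python
import math


def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Brute-force descriptor matching with a nearest/second-nearest
    ratio test: a match (i, j) is accepted only when the best candidate
    is clearly better than the runner-up."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test discards ambiguous matches, which matters with an active camera where background features also move between frames.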
9

Vision-assisted Object Tracking

Ozertem, Kemal Arda 01 February 2012 (has links) (PDF)
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection, for which two different background modeling methods are developed. The second part is feature extraction and estimation of optical flow between video frames. As the feature extraction method, a well-known corner detector algorithm is employed, applied only to the moving regions in the scene. For the feature points, the optical flow vectors are calculated using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to perform the tracking. The particle filter is improved by adding optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle-filter-based trackers. Based on the simulation results in this study, it can be argued that inserting vision-based optical flow estimation into the tracking formulation improves the overall performance.
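The correction-term idea can be shown in one dimension: each particle is propagated by the measured optical flow plus process noise, so the flow acts as a data-driven drift in the state equation. A toy sketch under that assumption (names are mine; the thesis's state is multi-dimensional and its filter includes weighting and resampling steps not shown here):

```python
import random


def propagate_particles(particles, flow, noise=1.0):
    """Propagate 1-D particle states as x' = x + flow + w, w ~ N(0, noise^2),
    where the optical-flow measurement supplies the drift term."""
    return [x + flow + random.gauss(0.0, noise) for x in particles]
```

With the flow term included, the particle cloud follows the observed image motion instead of relying on the process noise alone to explore it.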
10

Moving Object Identification And Event Recognition In Video Surveillance Systems

Orten, Burkay Birant 01 August 2005 (has links) (PDF)
This thesis is devoted to the problems of defining and developing the basic building blocks of an automated surveillance system. As its initial step, a background-modeling algorithm is described for segmenting moving objects from the background, which is capable of adapting to dynamic scene conditions, as well as determining shadows of the moving objects. After obtaining binary silhouettes for targets, object association between consecutive frames is achieved by a hypothesis-based tracking method. Both of these tasks provide basic information for higher-level processing, such as activity analysis and object identification. In order to recognize the nature of an event occurring in a scene, hidden Markov models (HMM) are utilized. For this aim, object trajectories, which are obtained through a successful track, are written as a sequence of flow vectors that capture the details of instantaneous velocity and location information. HMMs are trained with sequences obtained from usual motion patterns, and abnormality is detected by measuring the distance to these models. Finally, MPEG-7 visual descriptors are utilized in a regional manner for object identification. Color structure and homogeneous texture parameters of the independently moving objects are extracted, and classifiers, such as Support Vector Machine (SVM) and Bayesian plug-in (Mahalanobis distance), are utilized to test the performance of the proposed person identification mechanism. The simulation results with all the above building blocks are promising, indicating the possibility of constructing a fully automated surveillance system in the future.
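The abnormality measure described above is the likelihood of a trajectory's flow-vector sequence under a trained HMM; a minimal discrete forward algorithm computes that quantity. The toy parameters below are illustrative, not trained models, and a practical version would work in log space throughout to avoid underflow on long sequences.

```python
import math


def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the forward algorithm.

    start[s]: initial state probabilities; trans[p][s]: transition
    probabilities; emit[s][o]: emission probabilities.
    """
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
            for s in range(n)
        ]
    return math.log(sum(alpha))
```

Sequences scoring far below the likelihoods of the trained usual-motion models would be flagged as abnormal.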
