41

An FPGA-based Target Acquisition System

Marschner, Alexander R. 09 January 2008 (has links)
This work describes the development of an image processing algorithm, the implementation of that algorithm as both a strictly hardware design and a multi-core software design, and the side-by-side comparison of the two implementations. In the course of creating the multi-core software design, several improvements are made to the OpenFire soft-core microprocessor used to build the multi-core network. The hardware and multi-core software implementations of the image processing algorithm are compared side by side on an FPGA-based test platform. Results show that although the strictly hardware implementation leads in terms of lower power consumption and very low area consumption, the multi-core software implementation is simpler to modify and program. / Master of Science
42

Visual Tracking and Object Motion Prediction for Intelligent Vehicles

Yang, Tao 02 May 2019 (has links)
Object tracking and motion prediction are important for autonomous vehicles and can be applied in many other fields. First, we design a single-object tracker that uses compressive tracking to correct optical-flow tracking, achieving a balance between performance and processing speed. Given the efficiency of compressive feature extraction, we apply this tracker to multi-object tracking to improve performance without slowing processing down too much. Second, we improve the DCF-based single-object tracker by introducing multi-layer CNN features, spatial reliability analysis (through a foreground mask) and a conditional model-updating strategy. We then apply the DCF-based CNN tracker to multi-object tracking. The pre-trained VGGNet-19 and DCFNet are each tested as feature extractors. The discriminative model obtained by DCF is used for data association. Third, two LSTM models (seq2seq and seq2dense) are proposed for motion prediction of vehicles and pedestrians in the camera coordinate frame. Based on visual data and 3D point clouds (LiDAR), a Kalman-filter-based multi-object tracking system with a 3D detector is used to generate the object trajectories for testing. The proposed models are compared against a polynomial regression baseline for evaluation.
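The seq2seq idea described above can be sketched as follows in PyTorch: an encoder LSTM consumes the observed track positions and a decoder LSTM rolls out future positions one step at a time. This is only an illustrative sketch; the layer sizes, prediction horizon and decoding scheme are assumptions, not details taken from the thesis.

```python
import torch
import torch.nn as nn

class Seq2SeqMotionPredictor(nn.Module):
    """Encoder-decoder LSTM that maps an observed (x, y) trajectory
    to a predicted future trajectory (illustrative sketch, assumed sizes)."""
    def __init__(self, hidden=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, observed):                 # observed: (B, T_obs, 2)
        _, state = self.encoder(observed)        # summarize the observed motion
        step = observed[:, -1:, :]               # start decoding from the last position
        preds = []
        for _ in range(self.horizon):
            dec_out, state = self.decoder(step, state)
            step = self.out(dec_out)             # next predicted (x, y)
            preds.append(step)
        return torch.cat(preds, dim=1)           # (B, horizon, 2)

model = Seq2SeqMotionPredictor()
past = torch.randn(8, 20, 2)                     # 8 tracks, 20 observed frames (dummy data)
future = model(past)                             # (8, 12, 2) predicted positions
```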
43

Visual Detection And Tracking Of Moving Objects

Ergezer, Hamza 01 November 2007 (has links) (PDF)
In this study, the primary steps of a visual surveillance system are presented: moving object detection and tracking of the detected objects. Background subtraction has been performed to detect the moving objects in video taken from a static camera. Four methods, frame differencing, running (moving) average, eigenbackground subtraction and mixture of Gaussians, have been used in the background subtraction process. After background subtraction, additional operations such as morphological operations and connected component analysis yield the objects to be tracked. For tracking the moving objects, active contour models (snakes) have been used as one approach; Kalman and mean-shift trackers have also been utilized. A new approach has been proposed for the problem of tracking multiple targets, and we have implemented this method for single- and multiple-camera configurations. Multiple cameras have been used to augment the measurements, and a homography matrix has been calculated to find the correspondence between cameras. Measurements and tracks have then been associated by the new tracking method.
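As a rough illustration of the running-average background model followed by morphology and connected-component analysis described above, the sketch below uses OpenCV; the learning rate, threshold and minimum blob area are assumed values rather than parameters from the thesis.

```python
import cv2
import numpy as np

def detect_moving_objects(video_path, alpha=0.02, min_area=200):
    """Running-average background subtraction, morphological clean-up and
    connected-component analysis (a sketch of the pipeline described above)."""
    cap = cv2.VideoCapture(video_path)
    background = None
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
            continue
        # Update the running-average background model
        cv2.accumulateWeighted(gray, background, alpha)
        # Foreground = pixels that differ from the background model
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype(np.uint8), 25, 255, cv2.THRESH_BINARY)
        # Morphological operations remove noise and fill small holes
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # Connected components give the candidate objects to track
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        objects = [tuple(centroids[i]) for i in range(1, n)
                   if stats[i, cv2.CC_STAT_AREA] >= min_area]
        yield frame, objects
    cap.release()
```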
44

Multiple Target Tracking Using Multiple Cameras

Yilmaz, Mehmet 01 May 2008 (has links) (PDF)
Video surveillance has long been in use to monitor security-sensitive areas such as banks, department stores, crowded public places and borders. The rise in computer speed, the availability of cheap large-capacity storage devices and high-speed network infrastructure have paved the way for cheaper, multi-sensor video surveillance systems. In this thesis, the problem of tracking multiple targets with multiple cameras is discussed. Cameras have been located so that they have overlapping fields of view. A dynamic background-modeling algorithm is described for segmenting moving objects from the background, capable of adapting to dynamic scene changes and periodic motion, such as illumination change and swaying of trees. After segmentation of the foreground scene, the objects to be tracked have been acquired by morphological operations and connected component analysis. For tracking the moving objects, active contour models (snakes) and a Kalman tracker are among the approaches used. As the main tracking algorithm, a rule-based tracker has been developed first for a single camera and then extended to multiple cameras. Results of the existing and proposed methods are given in detail.
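A minimal sketch of the camera-to-camera correspondence step mentioned above: given corresponding ground-plane points from two overlapping views (the coordinates below are hypothetical), a homography is estimated and used to map a track's foot point from one view into the other.

```python
import cv2
import numpy as np

# Hypothetical corresponding ground-plane points picked in two overlapping views
pts_cam1 = np.float32([[102, 415], [536, 398], [611, 250], [143, 238]])
pts_cam2 = np.float32([[88, 402], [520, 430], [640, 270], [120, 220]])

# Estimate the homography relating camera 1's image plane to camera 2's
H, _ = cv2.findHomography(pts_cam1, pts_cam2)

def map_to_second_camera(point_xy, H):
    """Project a track's (x, y) foot point from camera 1 into camera 2."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]          # dehomogenize

print(map_to_second_camera((300.0, 350.0), H))
```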
45

Unsupervised multiple object tracking on video with no ego motion

Wu, Shuai January 2022 (has links)
Multiple-object tracking is a task within the field of computer vision. As the name states, the task consists of tracking multiple objects in a video; algorithms that complete this task are called trackers. Many existing trackers require supervision, meaning that the location and identity of each object appearing in the training data must be labeled. Generating these labels, usually through manual annotation of video material, is highly resource-consuming. On the other hand, unlike the well-known labeled multiple-object tracking datasets, there exists a massive amount of unlabeled video with different objects, environments and video specifications. Using such unlabeled video can therefore contribute to cheaper and more diverse datasets. There have been numerous attempts at unsupervised object tracking, but most rely on evaluating tracker performance on a labeled dataset; the reason is the lack of an evaluation method for unlabeled datasets. This project explores unsupervised pedestrian tracking on video taken from a stationary camera over a long duration. On top of a simple baseline tracker, two methods are proposed to extend the baseline and increase its performance. We then propose an evaluation method that works for unlabeled video, which we use to evaluate the proposed methods. The evaluation method consists of the trajectory completion rate and the number of ID switches. The trajectory completion rate is a novel metric proposed for pedestrian tracking. For video taken by a stationary camera, pedestrians generally enter and exit the scene at specific locations. We define a complete trajectory as one that goes from one such area to another, and the completion rate is the number of complete trajectories divided by the total number of trajectories. Results showed that the two proposed methods increased the trajectory completion rate over the original baseline, and both did so without significantly increasing the number of ID switches.
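A sketch of the completion-rate metric described above, under a simplifying assumption: the entry/exit areas are given as axis-aligned boxes (the zones and tracks below are hypothetical), and a trajectory counts as complete if its first and last points fall in two different areas.

```python
def in_area(point, box):
    """box = (x_min, y_min, x_max, y_max); point = (x, y)."""
    x, y = point
    return box[0] <= x <= box[2] and box[1] <= y <= box[3]

def completion_rate(trajectories, areas):
    """Fraction of trajectories whose endpoints lie in two different
    entry/exit areas (a sketch of the metric described above)."""
    complete = 0
    for traj in trajectories:
        start = next((i for i, a in enumerate(areas) if in_area(traj[0], a)), None)
        end = next((i for i, a in enumerate(areas) if in_area(traj[-1], a)), None)
        if start is not None and end is not None and start != end:
            complete += 1
    return complete / len(trajectories) if trajectories else 0.0

# Hypothetical scene with two exit zones at the left and right image borders
areas = [(0, 0, 50, 480), (590, 0, 640, 480)]
tracks = [[(10, 200), (300, 210), (620, 220)],   # complete: left zone -> right zone
          [(300, 100), (320, 140)]]              # incomplete: never reaches a zone
print(completion_rate(tracks, areas))            # 0.5
```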
46

Learning Object Properties From Manipulation for Manipulation

Güler, Püren January 2017 (has links)
The world contains objects with various properties - rigid, granular, liquid, elastic or plastic. As humans, while interacting with objects, we plan our manipulation by considering their properties. For instance, while holding a rigid object such as a brick, we adapt our grasp based on its centre of mass so as not to drop it. On the other hand, while manipulating a deformable object, we may consider additional properties such as elasticity or brittleness for grasp stability. Therefore, knowing object properties is an integral part of skilled manipulation of objects.  To manipulate objects skillfully, robots should be able to predict object properties as humans do. To predict these properties, interactions with objects are essential. These interactions give rise to distinct sensory signals that contain information about the object properties. The signals coming from a single sensory modality may give ambiguous information or noisy measurements. Hence, by integrating multi-sensory modalities (vision, touch, audio or proprioception), a manipulated object can be observed from different aspects, which can decrease the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation based on this information. During this adjustment, the robot can make use of a simulation model to predict the object behavior and plan the next action. For instance, if an object is assumed to be rigid before interaction but exhibits deformable behavior after interaction, an internal simulation model can be used to predict the load force exerted on the object, so that appropriate manipulation can be planned in the next action. Thus, learning about object properties can be defined as an active procedure: the robot explores the object properties actively and purposefully by interacting with the object, and adjusts its manipulation based on the sensory information and the object behavior predicted by an internal simulation model. This thesis investigates the mechanisms mentioned above for learning object properties: (i) multi-sensory information, (ii) simulation and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting a certain object property, the deformability of objects. Firstly, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. According to our results, both visual and tactile sensory data individually give high accuracy rates when classifying the content type based on the deformation. Next, we investigate the use of a simulation model to estimate the object deformability revealed through manipulation. The proposed method accurately identifies the deformability of the test objects in synthetic and real-world data. Finally, we investigate the integration of the deformation simulation in a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we illustrate that the active perception framework can map the heterogeneous deformability properties of a surface.
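As a rough illustration of the first study above (classifying a container's content from the deformation sensed during a squeeze), the sketch below trains an off-the-shelf classifier on flattened tactile time-series features. The data layout, class labels and classifier choice are assumptions for illustration, not the thesis's actual pipeline; real tactile recordings would replace the random stand-in data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: each sample is a tactile pressure time series recorded
# while a robotic hand squeezes a container (T timesteps x S taxels), flattened.
rng = np.random.default_rng(0)
n_samples, T, S = 120, 50, 16
X = rng.normal(size=(n_samples, T * S))          # stand-in for real tactile recordings
y = rng.integers(0, 3, size=n_samples)           # assumed content classes, e.g. rice/water/empty

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)        # cross-validated content-type accuracy
print(scores.mean())
```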
47

Bayesian models of eye movements

Lux, Erik January 2014 (has links)
Attention allows us to monitor objects or regions of visual space and extract information from them for report or storage. Classical theories of attention assumed a single focus of selection, but many everyday activities, such as playing video games, suggest otherwise. Nonetheless, the underlying mechanism that could explain the ability to divide attention has not been well established. Numerous attempts have been made to clarify divided attention, including analytical strategies, methods working with visual phenomena, and even more sophisticated predictors incorporating information about past selection decisions. Virtually all of these attempts approach the problem by constructing a simplified model of attention. In this study, we develop a version of an existing Bayesian framework to propose such models and evaluate their ability to generate eye movement trajectories. For the comparison of models, we use the eye movement trajectories generated by several analytical strategies. We measure the similarity between...
49

Modelling eye movements during Multiple Object Tracking

Děchtěrenko, Filip January 2012 (has links)
In everyday situations people have to track several objects at once (e.g. while driving or playing collective sports). The multiple object tracking (MOT) paradigm plausibly simulates tracking several targets under laboratory conditions. When we track targets in tasks with many other objects in the scene, it becomes difficult to discriminate objects in the periphery (crowding). Although tracking could be done using attention alone, it is an interesting question how humans plan their eye movements during tracking. In our study, we conducted an MOT experiment in which we repeatedly presented participants with several trials with a varied number of distractors; we recorded eye movements and measured their consistency using the Normalized Scanpath Saliency (NSS) metric. We created several analytical strategies employing crowding avoidance and compared them with the eye data. Besides the analytical models, we trained neural networks to predict eye movements in an MOT trial. The performance of the proposed models and neural networks was evaluated in a new MOT experiment. The analytical models explained the variability of eye movements well (results comparable to the intra-individual noise in the data); predictions based on neural networks were less successful.
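The Normalized Scanpath Saliency consistency measure mentioned above can be sketched as follows: a fixation map built from one set of gaze points is smoothed, z-normalized, and sampled at another set of gaze points. The smoothing bandwidth and the sample coordinates below are assumed values for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nss(reference_fixations, test_fixations, shape, sigma=25):
    """Normalized Scanpath Saliency: z-score a smoothed fixation map built from
    reference gaze points and average it at the test gaze points (a sketch)."""
    fmap = np.zeros(shape)
    for x, y in reference_fixations:
        fmap[int(y), int(x)] += 1                 # accumulate discrete fixations
    fmap = gaussian_filter(fmap, sigma)           # smooth into a continuous map
    fmap = (fmap - fmap.mean()) / (fmap.std() + 1e-9)
    return float(np.mean([fmap[int(y), int(x)] for x, y in test_fixations]))

# Hypothetical gaze samples on a 600 x 800 display
ref = [(400, 300), (410, 310), (390, 295)]
test = [(405, 305), (200, 100)]
print(nss(ref, test, shape=(600, 800)))
```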
50

Moving Object Detection And Tracking With Doppler LiDAR

Yuchi Ma (6632270) 11 June 2019 (has links)
Perceiving the dynamics of moving objects in complex scenarios is crucial for smart monitoring and safe navigation, and thus a key enabler for intelligent supervision and autonomous driving. A variety of approaches have been developed to detect and track moving objects from data collected by optical sensors and/or laser scanners, but most concentrate on certain types of objects or face the problem of lacking motion cues. In this thesis, we present a data-driven, model-free, detection-based tracking approach for tracking moving objects in urban scenes from time-sequential point clouds obtained with a state-of-the-art Doppler LiDAR, which collects not only spatial information (point clouds) but also Doppler images by using Doppler-shifted frequencies. In our approach, we first use Doppler images to detect moving points and determine the number of moving objects, which are then completely segmented via a region-growing technique. The detected objects are then input to the tracking stage, which is based on Multiple Hypothesis Tracking (MHT) with two innovative extensions. One extension is a new point cloud descriptor, Oriented Ensemble of Shape Function (OESF), proposed to evaluate structural similarity during object-to-track association in MHT. The other extension is that speed information from the Doppler images is used to predict the dynamic state of the moving objects, which is integrated into MHT to improve the estimation of their dynamic state. The proposed approach has been tested on datasets collected by a terrestrial Doppler LiDAR and a mobile Doppler LiDAR separately. The quantitative evaluation of the detection and tracking results shows the unique advantages of Doppler LiDAR and the effectiveness of the proposed detection and tracking approach.
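The second extension described above, feeding per-object speed derived from the Doppler image into the dynamic-state estimate, can be sketched with a constant-velocity Kalman filter whose measurement vector is augmented with that velocity. This glosses over the fact that Doppler measures only the radial speed component; the matrices, noise levels and measurement values below are assumptions for illustration.

```python
import numpy as np

dt = 0.1                                   # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],               # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.eye(4)                              # measure position and velocity directly
Q = 0.01 * np.eye(4)                       # assumed process noise
R = np.diag([0.1, 0.1, 0.5, 0.5])          # assumed measurement noise

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter whose measurement z holds
    the segmented object's centroid plus a Doppler-derived velocity estimate."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Hypothetical track: position from the segmented point cloud, velocity from Doppler
x, P = np.zeros(4), np.eye(4)
for z in [np.array([5.0, 2.0, 1.2, 0.0]), np.array([5.1, 2.0, 1.1, 0.1])]:
    x, P = kf_step(x, P, z)
print(x)
```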
