About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Robust visual detection and tracking of complex objects : applications to space autonomous rendez-vous and proximity operations / Détection et suivi visuels robustes d'objets complexes : applications au rendezvous spatial autonome

Petit, Antoine 19 December 2013 (has links)
In this thesis, we address the problem of fully localizing a known object through computer vision using a monocular camera, which is a central problem in robotics. Particular attention is paid to space robotics applications, with the aim of providing a unified visual localization system for autonomous navigation during space rendezvous and proximity operations. Two main challenges are tackled: initially detecting the targeted object and then tracking it frame by frame, providing the complete pose between the camera and the object, given the object's 3D CAD model. For detection, pose estimation is based on segmentation of the moving object and on an efficient probabilistic edge-based procedure that matches and aligns a set of synthetic views of the object with a sequence of initial images. For the tracking phase, pose estimation is handled by a 3D model-based tracking algorithm for which we propose three different types of visual features, representing the object by its edges, its silhouette, and a set of interest points. The reliability of the localization process is evaluated by propagating the uncertainty from the errors of the visual features. This uncertainty also feeds a linear Kalman filter on the camera velocity parameters. Qualitative and quantitative experiments have been performed on various synthetic and real data, including challenging imaging conditions, showing the efficiency and the benefits of the different contributions and their compliance with space rendezvous applications.
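
The abstract mentions feeding the propagated feature uncertainty into a linear Kalman filter on the camera velocity parameters. Below is a minimal, hypothetical sketch of such a filter on a 6-DoF velocity vector (not the author's implementation); the measurement covariance R stands in for the covariance propagated from the visual-feature errors, and all numbers are invented.

```python
import numpy as np

# Minimal constant-velocity Kalman filter on 6-DoF velocity parameters
# (3 translational + 3 rotational). R stands in for the covariance
# propagated from the visual-feature errors.

def kalman_predict(v, P, Q):
    # Constant-velocity model: the state transition is the identity,
    # so prediction only inflates the covariance by the process noise Q.
    return v, P + Q

def kalman_update(v, P, z, R):
    # z: velocity measured by the model-based tracker at this frame,
    # R: its covariance (e.g. propagated from edge/point feature errors).
    S = P + R                      # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)       # Kalman gain
    v_new = v + K @ (z - v)        # corrected velocity estimate
    P_new = (np.eye(6) - K) @ P    # corrected covariance
    return v_new, P_new

# Toy usage with made-up numbers.
v = np.zeros(6)                    # initial velocity estimate
P = np.eye(6) * 1e-2               # initial uncertainty
Q = np.eye(6) * 1e-4               # process noise
R = np.eye(6) * 1e-3               # measurement noise from feature uncertainty

z = np.array([0.01, 0.0, 0.02, 0.001, 0.0, 0.0])  # measured velocity twist
v, P = kalman_predict(v, P, Q)
v, P = kalman_update(v, P, z, R)
print(v)
```
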
12

TRACTS : um método para classificação de trajetórias de objetos móveis usando séries temporais

Santos, Irineu Júnior Pinheiro dos January 2011 (has links)
The growing use of global positioning systems (GPS) and other location systems has made the tracking of moving objects possible, producing a large volume of a new kind of data called trajectories of moving objects. However, there is a large gap between the amount of data generated by these devices and the knowledge that can be inferred from them. One type of knowledge discovery over trajectories of moving objects is classification. Trajectory classification is a relatively new research subject, and few methods have been proposed so far. Most of these methods were developed for a specific application; only a few propose a general method applicable to multiple domains or datasets. This work presents a new classification method that transforms trajectories into time series in order to obtain more discriminative features for classification. Experiments with real trajectory data show that the proposed approach is more effective than existing approaches.
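
To illustrate the general idea of turning a raw trajectory into a time series for classification, here is a small sketch that derives one possible series (speed over time) from (x, y, t) samples. This is only an illustration of the concept, not the TRACTS method itself, and the trajectory data are invented.

```python
import numpy as np

# Illustrative only: derive a simple time series (speed over time) from a
# raw trajectory of (x, y, t) samples, as one possible discriminative
# representation for a trajectory classifier.

def trajectory_to_speed_series(points):
    """points: array-like of shape (n, 3) with columns x, y, t (t strictly increasing)."""
    pts = np.asarray(points, dtype=float)
    dxy = np.diff(pts[:, :2], axis=0)           # displacement between samples
    dt = np.diff(pts[:, 2])                      # elapsed time between samples
    return np.linalg.norm(dxy, axis=1) / dt      # speed series of length n - 1

# Toy trajectory: roughly constant motion along x.
traj = [(0, 0, 0.0), (1, 0, 1.0), (2, 0.1, 2.0), (3.2, 0.1, 3.0)]
speeds = trajectory_to_speed_series(traj)
print(speeds)   # approximately [1.0, 1.005, 1.2]
```
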
14

Dynamic Data-Driven Visual Surveillance of Human Crowds via Cooperative Unmanned Vehicles

Minaeian, Sara January 2017 (has links)
Visual surveillance of human crowds in dynamic environments has attracted a great amount of computer vision research in recent years. Moving object detection, which conventionally includes motion segmentation and, optionally, object classification, is the first major task for any visual surveillance application. After detecting the targets, their geo-locations must be estimated so that they share a common reference coordinate system for higher-level decision-making. Depending on the required fidelity of the decision, multi-target data association may also be needed at higher levels to differentiate multiple targets across a series of frames. Applying all of these vision-based algorithms to a crowd surveillance system (the major application studied in this dissertation) using a team of cooperative unmanned vehicles (UVs) introduces new challenges. Since the visual sensors move with the UVs, both the targets and the environment are dynamic, which adds to the complexity and uncertainty of the video processing. Moreover, the limited onboard computation resources call for more efficient algorithms. Responding to these challenges, the goal of this dissertation is to design and develop an effective and efficient visual surveillance system based on the dynamic data-driven application system (DDDAS) paradigm, to be used by cooperative UVs for autonomous crowd control and border patrol. The proposed visual surveillance system includes several modules: 1) a motion detection module, in which a new sliding-window method for detecting multiple moving objects is proposed to segment the moving foreground using the moving camera onboard the unmanned aerial vehicle (UAV); 2) a target recognition module, in which a customized method based on histograms of oriented gradients is applied to classify human targets using the onboard camera of the unmanned ground vehicle (UGV); 3) a target geo-localization module, in which a new moving-landmark-based method is proposed for estimating the geo-location of the detected crowd from the UAV, while a heuristic method based on triangulation is applied for geo-locating the detected individuals via the UGV; and 4) a multi-target data association module, in which the affinity score is dynamically adjusted to comply with the changing dispersion of the detected targets over successive frames. In this dissertation, a cooperative team of one UAV and multiple UGVs with onboard visual sensors is used to take advantage of the complementary characteristics (e.g., different fidelities and view perspectives) of these UVs for crowd surveillance. The DDDAS paradigm is applied to these vision-based modules, where the computational and instrumentation aspects of the application system are unified for more accurate or efficient analysis according to the scenario. To illustrate and demonstrate the proposed visual surveillance system, aerial and ground video sequences from the UVs, as well as simulation models, are developed, and experiments are conducted using them. The experimental results on both the developed videos and literature datasets show the effectiveness and efficiency of the proposed modules and their promising performance in the considered crowd surveillance application.
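
As a rough illustration of the multi-target data association idea described above, the sketch below builds an affinity matrix from inter-target distances, normalised by the dispersion of the current detections (a simplified stand-in for the dynamically adjusted affinity score), and solves the assignment with SciPy's optimal assignment solver. The weights and data are invented; this is not the dissertation's algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of frame-to-frame multi-target association: affinity from distances,
# scaled by how spread out the current detections are, then optimal assignment.

def associate(prev_centres, curr_centres):
    prev = np.asarray(prev_centres, float)
    curr = np.asarray(curr_centres, float)
    dists = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    dispersion = np.std(curr, axis=0).mean() + 1e-6   # spread of current detections
    affinity = np.exp(-dists / dispersion)            # higher = more likely same target
    rows, cols = linear_sum_assignment(-affinity)     # maximise total affinity
    return list(zip(rows.tolist(), cols.tolist()))

prev = [(10, 10), (50, 40)]
curr = [(52, 41), (11, 12)]
print(associate(prev, curr))   # [(0, 1), (1, 0)]
```
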
15

Online Moving Object Visualization with Geo-Referenced Data

Zhao, Guangqiang 13 November 2015 (has links)
As a result of the rapid evolution of smart mobile devices and the wide adoption of satellite-based positioning, the moving object database (MOD) has become an active research topic in recent years. Moving objects generate a large amount of geo-referenced data of different types, such as videos, audio, images, and sensor logs. In order to better analyze and utilize these data, it is useful and necessary to visualize them on a map. With the rise of web mapping, visualizing moving objects and geo-referenced data has never been easier. While displaying the trajectory of a moving object is a mature technology, there is little research on visualizing both the location and the data of moving objects in a synchronized manner. This dissertation proposes a general moving object visualization model to address this problem. The model divides spatial data visualization systems into four categories. Another contribution of this dissertation is a framework that handles all of these visualization tasks with synchronization control in mind. The platform relies on the TerraFly web mapping system. To evaluate the universality and effectiveness of the proposed framework, this dissertation presents four visualization systems that deal with a variety of situations and different data types.
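
The core synchronization idea — keeping the map marker in step with geo-referenced media — can be sketched as a simple time lookup into a timestamped track. The data layout below is hypothetical and not the dissertation's design; the actual system is built on the TerraFly web mapping platform.

```python
import bisect

# Sketch: given a timestamped GPS track and the current playback time of a
# geo-referenced video, look up the object's position so the map marker and
# the video frame stay synchronized.

track_times = [0.0, 5.0, 10.0, 15.0]                     # seconds since start
track_positions = [(25.76, -80.19), (25.77, -80.19),
                   (25.77, -80.20), (25.78, -80.20)]      # (lat, lon) fixes

def position_at(playback_time):
    # Return the most recent GPS fix at or before the playback time.
    i = bisect.bisect_right(track_times, playback_time) - 1
    return track_positions[max(i, 0)]

print(position_at(7.3))   # (25.77, -80.19)
```
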
16

AATrackT: A deep learning network using attentions for tracking fast-moving and tiny objects : (A)ttention (A)ugmented - (Track)ing on (T)iny objects

Lundberg Andersson, Fredric January 2022 (has links)
Recent advances in deep learning have made it possible to visually track objects through a video sequence. Moreover, as transformers were introduced into computer vision, new state-of-the-art performance was achieved in visual tracking. However, most of these studies use attention to correlate the distinguishing factors between the target object and candidate objects in order to localise the object throughout the video sequence. This approach is not adequate for tracking tiny objects, and conventional trackers in general are often not applicable to extremely small objects or objects that move fast. Therefore, the purpose of this study is to improve current methods for tracking tiny, fast-moving objects with the help of attention. A deep neural network, named AATrackT, is built to address this gap by framing the task as a visual image segmentation problem. The proposed method uses data extracted from broadcast videos of tennis matches. Moreover, to capture the global context of images, attention augmented convolutions are used as a substitute for the conventional convolution operation. Contrary to what the authors assumed, the experiment indicated that attention augmented convolutions did not improve tracking performance. Our findings show that the main reason is that the spatial resolution of the activation maps, 72x128, is too large for the attention weights to converge.
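
To make the convergence issue concrete: self-attention over the spatial positions of a feature map produces an (H·W) × (H·W) weight matrix, i.e. 9216 × 9216 entries per head at 72x128 resolution. The sketch below is a simplified single-head spatial self-attention in NumPy on a tiny feature map; it is not the AATrackT architecture, and all sizes and weights are invented.

```python
import numpy as np

# Simplified single-head self-attention over the spatial positions of a
# feature map, illustrating how the attention matrix grows with resolution.

def spatial_self_attention(x, Wq, Wk, Wv):
    """x: feature map of shape (H, W, C); Wq, Wk, Wv: projections of shape (C, d)."""
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)                    # one token per spatial position
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(q.shape[1])           # (H*W, H*W) attention logits
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over key positions
    return (attn @ v).reshape(H, W, -1)              # attended feature map

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 16))                      # tiny 8x8 map instead of 72x128
Wq = Wk = Wv = rng.normal(size=(16, 8))
print(spatial_self_attention(x, Wq, Wk, Wv).shape)   # (8, 8, 8)
```
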
17

Use of Thermal Imagery for Robust Moving Object Detection

Bergenroth, Hannah January 2021 (has links)
This work proposes a system that utilizes both infrared and visual imagery to create a more robust object detection and classification system. The system consists of two main parts: a moving object detector and a target classifier. The first stage detects moving objects in the visible and infrared spectra using background subtraction based on Gaussian mixture models. Low-level fusion is performed to combine the foreground regions from the respective domains. In the second stage, a convolutional neural network (CNN), pre-trained on the ImageNet dataset, is used to classify the detected targets into one of the pre-defined classes, human or vehicle. The performance of the proposed object detector is evaluated on multiple video streams recorded in different areas and under various weather conditions, which form a broad basis for testing the suggested method. The accuracy of the classifier is evaluated on experimentally generated images from the moving object detection stage, supplemented with the publicly available CIFAR-10 and CIFAR-100 datasets. The low-level fusion method proves more effective than using either domain separately in terms of detection results. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
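
A minimal sketch of the two-domain detection stage described above: one Gaussian-mixture background model per spectrum (OpenCV's MOG2), with the foreground masks fused at the pixel level by an OR. Synthetic frames are generated so the snippet runs standalone; in practice the frames would come from the visual and thermal cameras, and this is not the thesis implementation.

```python
import cv2
import numpy as np

# One background model per spectrum, low-level fusion of the foreground masks.
bg_vis = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
bg_ir = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

rng = np.random.default_rng(1)
for t in range(30):
    vis = rng.integers(0, 20, (120, 160), dtype=np.uint8)   # noisy static background
    ir = rng.integers(0, 20, (120, 160), dtype=np.uint8)
    if t > 20:                                               # a "moving object" appears
        vis[40:60, 50 + t:70 + t] = 200
        ir[40:60, 50 + t:70 + t] = 255
    mask_vis = bg_vis.apply(vis)
    mask_ir = bg_ir.apply(ir)
    fused = cv2.bitwise_or(mask_vis, mask_ir)                # low-level fusion

print(int(np.count_nonzero(fused)))   # foreground pixels in the last fused mask
```
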
18

Hipokampální neuronální reprezentace pohybujícího se objektu v nové úloze vyhýbání se prostoru / Hippocampal neuronal representation of a moving object in a novel spatial avoidance task

Ahuja, Nikhil January 2021 (has links)
In real-world environments, animals need to organize their behavior relative to other moving animals or objects: when hunting, when avoiding a predator, when migrating in groups, or during various social interactions. In all of these situations, the animal needs to orient relative to another moving animal or object. To understand the role of the hippocampus in this ability, we adopted a two-step approach. We first developed a task that mimics important elements of this behavior in the laboratory. The task requires the rat to assess not only its distance from the moving object but also its position relative to the object. We then studied how neurons in the hippocampal CA1 subfield encode the subject, the moving object, and the environment in this behavioral paradigm, and how these representations interact with one another. In rats, we aimed to characterize spatial behaviors relative to moving objects and to explore the cognitive mechanisms controlling these behaviors. Three groups of animals were trained to avoid a mild foot shock delivered at one of three positions: in front of, on the left side of, or on the right side of a moving robot. Using different variations of the task, we also probed whether the avoidance was simply due to an increased noise level, the size of the retinal image, or the appearance of the robot. As the...
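
As a small geometric aside, the quantity the task requires the animal to estimate — its distance from the moving robot and its bearing relative to the robot's heading (front, left, or right) — can be computed as follows. This is purely illustrative and not part of the experimental analysis described above.

```python
import numpy as np

def relative_position(subject_xy, object_xy, object_heading_rad):
    diff = np.asarray(subject_xy, float) - np.asarray(object_xy, float)
    distance = np.linalg.norm(diff)
    # Angle of the subject as seen from the object, relative to its heading.
    bearing = np.arctan2(diff[1], diff[0]) - object_heading_rad
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return distance, np.degrees(bearing)

# Robot at the origin heading along +x; rat 30 cm away on the robot's left.
print(relative_position((0.0, 30.0), (0.0, 0.0), 0.0))   # distance 30, bearing +90 deg (left)
```
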
19

Détection de changements à partir de nuages de points de cartographie mobile / Change detection from mobile laser scanning point clouds

Xiao, Wen 12 November 2015 (has links)
Mobile mapping systems are increasingly used for mapping street environments; mobile laser scanning in particular enables precise street mapping, scene understanding, facade modelling, etc. In this research, change detection from mobile laser scanning point clouds is investigated. First, street environment change detection using RIEGL data is studied for the purposes of updating geographic databases and identifying temporary objects. An occupancy-based method is presented to overcome the challenges encountered by conventional distance-based methods, such as occlusion and anisotropic sampling. Occluded areas are identified by modelling the occupancy states within the laser scanning range. The gaps between points and scan lines are interpolated in the sensor reference frame, where the sampling density is isotropic. Although there are some conflicts for penetrable objects, e.g. trees and fences, the occupancy-based method significantly improves on the point-to-triangle distance-based method. The change detection method is also applied to data acquired by different laser scanners and at different temporal scales, in order to demonstrate its wide range of applications. The local sensor reference frame is adapted to the Velodyne laser scanning geometry, and the occupancy-based method is used to detect moving objects. Since the method detects the change at each point, moving objects are detected at the point level. As the Velodyne scanner continuously scans its surroundings, the trajectories of moving objects can be extracted; a simultaneous detection and tracking algorithm is proposed to recover pedestrian trajectories in order to accurately estimate pedestrian traffic flow in public places. Changes can be detected not only at the point level but also at the object level. The changes of cars parked along the streets at different times of day are detected to help regulate on-street parking, since parking duration is limited. In this case, cars are first detected and then compared with their corresponding counterparts from passes at different times. Apart from car changes, parking positions and car types are also important information for parking management; all of this information is extracted within a supervised learning framework. Furthermore, a model-based car reconstruction method, which fits a generic deformable model to the data, is proposed to precisely locate cars. The model parameters are also treated as car features for better decision making, and the geometrically accurate models can be used for visualization purposes. Under the theme of change detection, related topics such as tracking, classification, and modelling are also studied and illustrated through practical applications. More importantly, the change detection methods are applied to different data acquisition geometries and at multiple temporal scales, through both bottom-up (point-based) and top-down (object-based) strategies.
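
To convey the occupancy idea in its simplest form: along each laser beam of a reference scan, the space closer than the measured range was traversed by the beam and is therefore known to be empty, so a later point falling well inside that empty space indicates a change (something appeared). The sketch below is a deliberately crude 2D polar version with invented data, not the thesis method, which operates on full 3D point clouds.

```python
import numpy as np

# Toy 2D occupancy-based change detection on a polar grid of bearings.
N_BEARINGS = 360

def bearing_index(x, y):
    return int(np.degrees(np.arctan2(y, x)) % 360)

def reference_free_ranges(ref_points):
    # Farthest hit per bearing approximates the extent of observed free space;
    # unobserved bearings stay at 0, so points there are never flagged (occlusion-safe).
    ranges = np.zeros(N_BEARINGS)
    for x, y in ref_points:
        b = bearing_index(x, y)
        ranges[b] = max(ranges[b], np.hypot(x, y))
    return ranges

def changed_points(new_points, free_ranges, margin=0.5):
    out = []
    for x, y in new_points:
        r, b = np.hypot(x, y), bearing_index(x, y)
        if r < free_ranges[b] - margin:        # lies inside previously empty space
            out.append((x, y))
    return out

reference = [(10.0, 0.0), (0.0, 10.0), (-10.0, 0.0), (0.0, -10.0)]   # old walls
later = [(4.0, 0.0), (0.0, 9.9)]               # first point is new, second is the old wall
free = reference_free_ranges(reference)
print(changed_points(later, free))             # [(4.0, 0.0)]
```
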
20

Multiple Hypothesis Tracking For Multiple Visual Targets

Turker, Burcu 01 April 2010 (has links) (PDF)
The visual target tracking problem consists of two parts: obtaining targets from camera measurements and tracking them. Even though it has been studied for more than 30 years, some problems remain unsolved, especially in the multiple-target case: associating measurements with targets, creating new tracks, and deleting old ones. Moreover, occlusions and crossing targets must be handled suitably. We believe that a slightly modified version of multiple hypothesis tracking can deal with most of the aforementioned problems with sufficient success. Distance, track size, track color, gate size, and track history are used as parameters to evaluate the hypotheses generated for the measurement-to-track association problem, whereas size and color are used as parameters for the occlusion problem. The overall tracker has been fine-tuned on a set of scenarios and observed to perform well on the test scenarios as well. Furthermore, the performance of the tracker is analyzed with respect to these parameters in both association and occlusion-handling situations.
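
As a toy illustration of scoring a single measurement-to-track hypothesis from the kinds of cues listed above (distance, size, color, gate), consider the sketch below. The weights, gate threshold, and data are invented, and a real multiple hypothesis tracker would enumerate and prune many such hypotheses per frame; this is not the thesis implementation.

```python
import numpy as np

def hypothesis_score(track, meas, gate=50.0):
    d = np.linalg.norm(np.subtract(track["pos"], meas["pos"]))
    if d > gate:                                   # outside the gate: impossible pairing
        return 0.0
    size_sim = min(track["size"], meas["size"]) / max(track["size"], meas["size"])
    color_sim = 1.0 - 0.5 * np.abs(np.subtract(track["color"], meas["color"])).mean()
    dist_sim = 1.0 - d / gate
    # Weighted combination of the cues; weights are arbitrary for illustration.
    return 0.5 * dist_sim + 0.25 * size_sim + 0.25 * color_sim

track = {"pos": (100, 80), "size": 400, "color": (0.2, 0.4, 0.6)}
meas = {"pos": (104, 83), "size": 380, "color": (0.25, 0.4, 0.55)}
print(round(hypothesis_score(track, meas), 3))   # approximately 0.933
```
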
