  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

3D Video Capture of a Moving Object in a Wide Area Using Active Cameras / 能動カメラ群を用いた広域移動対象の3次元ビデオ撮影

Yamaguchi, Tatsuhisa 24 September 2013 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / Degree No. Kō 17919 / Informatics Doctorate No. 501 / 新制||情||89 (University Library) / 30739 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Takashi Matsuyama, Professor Michihiko Minoh, Professor Yuichi Nakamura / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
22

Adaptive Fusion Approach for Multiple Feature Object Tracking

Krieger, Evan January 2018 (has links)
No description available.
23

Fully Transparent Computer Vision Framework for Ship Detection and Tracking in Satellite Imagery

Gottweis, Jason T. January 2018 (has links)
No description available.
24

Real-Time Multiple Object Tracking : A Study on the Importance of Speed / Identifiering av rörliga objekt i realtid

Murray, Samuel January 2017 (has links)
Multiple object tracking consists of detecting and identifying objects in video. In some applications, such as robotics and surveillance, the tracking must be performed in real time, which requires the algorithm to run as fast as the frame rate of the video. Today's top-performing tracking methods run at only a few frames per second and thus cannot be used in real time. Further, reported tracker speeds commonly exclude the time it takes to detect objects. We argue that this way of measuring speed is not relevant for robotics or embedded systems, where detection runs on the same machine as tracking. One way of running a method in real time is to not look at every frame, but to skip frames so that the effective frame rate of the video matches that of the tracking method; however, we expect this to degrade performance. In this project, we implement a multiple object tracker, following the tracking-by-detection paradigm, as an extension of an existing method. It models the movement of objects by solving the filtering problem, and associates detections with predicted new locations in new frames using the Hungarian algorithm. Three different similarity measures are used, based on the location and shape of the bounding boxes. Compared to other trackers on the MOTChallenge leaderboard, our method, referred to as C++SORT, is the fastest non-anonymous submission, while also achieving decent scores on other metrics. By running our model on the Okutama-Action dataset, sampled at different frame rates, we show that performance is greatly reduced when running the model, including object detection, in real time. In most metrics the score drops by 50%, and in certain cases by as much as 90%. We argue that this indicates that other, slower methods could not be used for real-time tracking, though more research specifically targeting this question is required.
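The association step described in the abstract above, matching predicted track locations to new detections with the Hungarian algorithm, can be sketched as follows. This is an illustrative sketch, not code from C++SORT: the IoU similarity measure, the 0.3 gating threshold, and all function names are assumptions for demonstration.

```python
# Hungarian-algorithm association of predicted track boxes to detections,
# with intersection-over-union (IoU) as the similarity measure.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(predicted, detections, min_iou=0.3):
    """Return (track_idx, det_idx) pairs whose IoU meets the threshold."""
    cost = np.zeros((len(predicted), len(detections)))
    for i, p in enumerate(predicted):
        for j, d in enumerate(detections):
            cost[i, j] = -iou(p, d)  # Hungarian solver minimizes total cost
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= min_iou]
```

In a full tracker, unmatched detections would spawn new tracks and unmatched tracks would eventually be dropped; those bookkeeping steps are omitted here.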
25

Detection and tracking of unknown objects on the road based on sparse LiDAR data for heavy duty vehicles / Upptäckt och spårning av okända objekt på vägen baserat på glesa LiDAR-data för tunga fordon

Shilo, Albina January 2018 (has links)
Environment perception within autonomous driving aims to provide a comprehensive and accurate model of the surrounding environment based on information from sensors. For the model to be comprehensive it must provide the kinematic state of surrounding objects. Existing approaches to object detection and tracking (estimation of the kinematic state) are developed for dense 3D LiDAR data from a sensor mounted on a car. However, it is a challenge to design a robust detection and tracking algorithm for sparse 3D LiDAR data. Therefore, in this thesis we propose a framework for detection and tracking of unknown objects using sparse data from a VLP-16 LiDAR mounted on a heavy duty vehicle. Experiments reveal that the proposed framework performs well, detecting trucks, buses, cars, pedestrians, and even smaller objects larger than 61x41x40 cm. The detection range depends on the size of the object: large objects (trucks and buses) are detected within 25 m, while cars and pedestrians are detected within 18 m and 15 m respectively. The overall multiple object tracking accuracy of the framework is 79%.
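The reported 61x41x40 cm lower limit on detectable objects implies a size filter over LiDAR point clusters, which might look roughly like the sketch below. This is a hedged illustration under assumed conventions (axis-aligned bounding boxes, dimensions in metres), not the thesis implementation.

```python
# Keep a LiDAR cluster only if its bounding box exceeds the minimum object size.
import numpy as np

MIN_DIMS = np.array([0.61, 0.41, 0.40])  # metres, from the reported 61x41x40 cm limit

def keep_cluster(points):
    """points: (N, 3) array of x, y, z coordinates for one cluster."""
    extent = points.max(axis=0) - points.min(axis=0)
    # Compare sorted extents so the object's orientation does not matter.
    return bool(np.all(np.sort(extent) >= np.sort(MIN_DIMS)))
```

A real pipeline would first segment the ground plane and cluster the remaining points (e.g. by Euclidean distance) before applying such a filter.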
26

Thermal-RGB Sensory Data for Reliable and Robust Perception

El Ahmar, Wassim 29 November 2023 (has links)
The significant advancements and breakthroughs achieved in Machine Learning (ML) have revolutionized the field of Computer Vision (CV), where numerous real-world applications now rely on state-of-the-art methods. Advanced video surveillance and analytics, entertainment, and autonomous vehicles are a few examples that depend heavily on reliable and accurate perception systems. Deep learning in Computer Vision has come a long way since it took off in 2012 with the introduction of AlexNet. Convolutional Neural Networks (CNNs) have evolved to become more accurate and reliable, thanks to advancements in GPU parallel processing and to the recent availability of large-scale, high-quality annotated datasets that allow the training of complex models. However, ML models can only be as good as the data they train on and the data they receive in production. In real-world environments, a perception system often needs to operate under varying conditions (weather, lighting, obstructions, etc.). As such, it is imperative for a perception system to utilize information from different types of sensors to mitigate the limitations of individual sensors. In this dissertation, we study the efficacy of using thermal sensors to enhance the robustness of perception systems, focusing on two common vision tasks: object detection and multiple object tracking. Through our work, we prove the viability of thermal sensors as a complement, and in some scenarios a replacement, to RGB cameras. Given their important applications in autonomous vehicles and surveillance, we focus our research on pedestrian and vehicle perception. We also introduce the world's first (to the best of our knowledge) large-scale dataset for pedestrian detection and tracking including thermal and corresponding RGB images.
27

Efficient CNN-based Object ID Association Model for Multiple Object Tracking

Danesh, Parisasadat January 2023 (has links)
No description available.
28

Debris Tracking In A Semistable Background

Vanumamalai, Karthik Kalathi 01 January 2005 (has links)
Object tracking plays a pivotal role in many computer vision applications such as video surveillance, human gesture recognition, and object-based video compression standards such as MPEG-4. Automatic detection of moving objects and tracking of their motion has long been an important topic in computer vision and robotics. This thesis deals with the problem of detecting the presence of debris or other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; therefore, the background must be stabilized before any moving object in the scene can be tracked. Two problems are considered here, both using footage from a Space Shuttle launch, with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, in which the transformation parameters (translation, rotation) are computed automatically. This information is then passed to a Kalman filtering stage, which produces a mask image used to find high-intensity areas of potential interest.
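FFT-based image registration of the kind described above is commonly implemented as phase correlation. The sketch below recovers an integer translation only; the rotation estimate mentioned in the abstract would need an additional step (e.g. a log-polar transform), and the function name is an assumption for illustration.

```python
# Phase correlation: recover the translation between two frames from the
# phase of their cross-power spectrum.
import numpy as np

def phase_correlation(f, g):
    """Estimate the integer (dy, dx) shift taking frame f to frame g."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of the spectrum back to negative shifts.
    if dy > f.shape[0] // 2:
        dy -= f.shape[0]
    if dx > f.shape[1] // 2:
        dx -= f.shape[1]
    return int(dy), int(dx)
```

Once the shift is known, the previous frame can be warped onto the current one, so that residual frame differences highlight genuinely moving objects such as debris.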
29

Multi-Template Temporal Siamese Network for Visual Object Tracking

Sekhavati, Ali 04 January 2023 (has links)
Visual object tracking is the task of giving a unique ID to an object in a video frame, determining whether it is present in the current frame and, if so, precisely localizing its position. Object tracking faces numerous challenges, such as changes of illumination, partial or full occlusion, changes of target appearance, blurring caused by camera movement, the presence of objects similar to the target, and changes in video image quality over time. Due to these challenges, traditional computer vision techniques cannot perform high-quality tracking, especially long-term tracking. Nowadays almost all state-of-the-art methods in object tracking use artificial intelligence, and more specifically Convolutional Neural Networks. In this work, we present a Siamese-based tracker that differs from previous work in two ways. Firstly, most Siamese-based trackers take the target in the first frame as the ground truth. Despite the success of such methods in previous years, this does not guarantee robust tracking, as it cannot handle many of the challenges that change the target's appearance, such as blurring caused by camera movement, occlusion, and pose variation. In this work, while keeping the first frame as a template, we add five additional templates that are dynamically updated and replaced based on the target classification score in different frames. We call this a bag of dynamic templates; diversity, similarity, and recency are the criteria for choosing its members. Secondly, many Siamese-based trackers are vulnerable to mistakenly tracking another, similar-looking object instead of the intended target. Many researchers have proposed computationally expensive approaches, such as tracking all the distractors along with the given target and discriminating between them in every frame. We instead handle this issue by estimating the target's position in the next frame from its bounding box coordinates in previous frames.
We use a temporal network over the past several frames, measure classification scores of candidates against the templates in the bag of dynamic templates, and use a sequential tracker confidence value that reflects how confident the tracker has been in previous frames. We call this module a robustifier, as it prevents the tracker from continuously switching between the target and possible distractors. Extensive experiments on the OTB 50, OTB 100, and UAV20L datasets demonstrate the superiority of our work over state-of-the-art methods.
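The bag-of-dynamic-templates update described above can be sketched as follows. The capacity of five extra templates matches the abstract, but the score threshold, the replace-the-weakest rule, and all names are simplifying assumptions; the diversity and recency criteria the thesis also uses are omitted here.

```python
# A fixed first-frame template plus a capacity-limited bag of dynamic
# templates, updated by target classification score.
from dataclasses import dataclass, field

@dataclass
class TemplateBag:
    first_frame: object                           # fixed ground-truth template
    capacity: int = 5                             # number of dynamic slots
    dynamic: list = field(default_factory=list)   # (score, template) pairs

    def maybe_add(self, template, score, min_score=0.8):
        """Add a high-confidence template, evicting the weakest if full."""
        if score < min_score:
            return False
        if len(self.dynamic) < self.capacity:
            self.dynamic.append((score, template))
            return True
        worst = min(range(len(self.dynamic)), key=lambda i: self.dynamic[i][0])
        if score > self.dynamic[worst][0]:
            self.dynamic[worst] = (score, template)
            return True
        return False

    def templates(self):
        """All templates to match against: first frame plus dynamic members."""
        return [self.first_frame] + [t for _, t in self.dynamic]
```

Keeping the first-frame template outside the eviction policy preserves a drift-free anchor even when the dynamic members have all been contaminated by appearance change.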
30

Surveillance in a Smart Home Environment

Patrick, Ryan Stewart 08 July 2010 (has links)
No description available.
