  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

3D Video Capture of a Moving Object in a Wide Area Using Active Cameras / 能動カメラ群を用いた広域移動対象の3次元ビデオ撮影

Yamaguchi, Tatsuhisa 24 September 2013 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / Kō No. 17919 / Jōhaku No. 501 / 新制||情||89 (University Library) / 30739 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Takashi Matsuyama; Professor Michihiko Minoh; Professor Yuichi Nakamura / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
22

Adaptive Fusion Approach for Multiple Feature Object Tracking

Krieger, Evan January 2018 (has links)
No description available.
23

Fully Transparent Computer Vision Framework for Ship Detection and Tracking in Satellite Imagery

Gottweis, Jason T. January 2018 (has links)
No description available.
24

Real-Time Multiple Object Tracking : A Study on the Importance of Speed / Identifiering av rörliga objekt i realtid

Murray, Samuel January 2017 (has links)
Multiple object tracking consists of detecting and identifying objects in video. In some applications, such as robotics and surveillance, it is desired that the tracking is performed in real-time. This poses a challenge in that it requires the algorithm to run as fast as the frame-rate of the video. Today's top-performing tracking methods run at only a few frames per second, and can thus not be used in real-time. Further, when determining the speed of the tracker, it is common not to include the time it takes to detect objects. We argue that this way of measuring speed is not relevant for robotics or embedded systems, where the detection of objects is done on the same machine as the tracking. We propose that one way of running a method in real-time is to not look at every frame, but to skip frames so that the video has the same frame-rate as the tracking method. However, we believe that this will lead to decreased performance. In this project, we implement a multiple object tracker, following the tracking-by-detection paradigm, as an extension of an existing method. It works by modelling the movement of objects by solving the filtering problem, and associating detections with predicted new locations in new frames using the Hungarian algorithm. Three different similarity measures are used, which use the location and shape of the bounding boxes. Compared to other trackers on the MOTChallenge leaderboard, our method, referred to as C++SORT, is the fastest non-anonymous submission, while also achieving decent scores on other metrics. By running our model on the Okutama-Action dataset, sampled at different frame-rates, we show that the performance is greatly reduced when running the model, including object detection, in real-time. In most metrics, the score is reduced by 50%, but in certain cases by as much as 90%. We argue that this indicates that other, slower methods could not be used for tracking in real-time, but that more research is required specifically on this topic.
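The abstract describes the core of a SORT-style tracker: detections in each new frame are matched to per-track Kalman-filter predictions by solving an assignment problem with the Hungarian algorithm. As a rough sketch of that association step only (the Kalman prediction is omitted, IoU is used as the single similarity measure, and the `iou_threshold=0.3` cutoff is an illustrative choice, not a value from the thesis):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def associate(predicted, detections, iou_threshold=0.3):
    """Match predicted track boxes to new detections via the Hungarian algorithm.

    Returns (matches, unmatched_track_indices, unmatched_detection_indices).
    """
    if not predicted or not detections:
        return [], list(range(len(predicted))), list(range(len(detections)))
    # Cost = 1 - IoU, so minimizing cost maximizes total overlap.
    cost = np.array([[1.0 - iou(p, d) for d in detections] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    matches, matched_r, matched_c = [], set(), set()
    for r, c in zip(rows, cols):
        # Reject assignments whose overlap is below the threshold.
        if 1.0 - cost[r, c] >= iou_threshold:
            matches.append((r, c))
            matched_r.add(r)
            matched_c.add(c)
    unmatched_tracks = [r for r in range(len(predicted)) if r not in matched_r]
    unmatched_dets = [c for c in range(len(detections)) if c not in matched_c]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched detections would typically spawn new tracks, and tracks unmatched for several frames would be dropped.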
25

Detection and tracking of unknown objects on the road based on sparse LiDAR data for heavy duty vehicles / Upptäckt och spårning av okända objekt på vägen baserat på glesa LiDAR-data för tunga fordon

Shilo, Albina January 2018 (has links)
Environment perception within autonomous driving aims to provide a comprehensive and accurate model of the surrounding environment based on information from sensors. For the model to be comprehensive, it must provide the kinematic state of surrounding objects. Existing approaches to object detection and tracking (estimation of kinematic state) are developed for dense 3D LiDAR data from a sensor mounted on a car. However, it is a challenge to design a robust detection and tracking algorithm for sparse 3D LiDAR data. Therefore, in this thesis we propose a framework for detection and tracking of unknown objects using sparse data from a VLP-16 LiDAR mounted on a heavy-duty vehicle. Experiments reveal that the proposed framework performs well, detecting trucks, buses, cars, pedestrians, and even smaller objects larger than 61x41x40 cm. The detection range depends on the size of the object: large objects (trucks and buses) are detected within 25 m, while cars and pedestrians are detected within 18 m and 15 m, respectively. The overall multiple object tracking accuracy of the framework is 79%.
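The abstract does not spell out the detection step, but a common way such frameworks form object candidates from sparse LiDAR returns is Euclidean clustering of the (non-ground) point cloud: points closer than a radius are grouped into the same cluster. A minimal sketch using a k-d tree and union-find (the `radius` and `min_points` values are illustrative, not from the thesis):

```python
import numpy as np
from scipy.spatial import cKDTree


def euclidean_clusters(points, radius=0.5, min_points=3):
    """Group 3D points into clusters: points within `radius` of each other
    (directly or transitively) share a cluster. Returns lists of point indices
    for clusters with at least `min_points` members."""
    tree = cKDTree(points)
    parent = list(range(len(points)))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union every pair of points closer than `radius`.
    for i, j in tree.query_pairs(radius):
        parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    # Discard tiny clusters, which are likely noise in sparse scans.
    return [c for c in clusters.values() if len(c) >= min_points]
```

Each surviving cluster can then be fitted with a bounding box and handed to the tracker as a detected object.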
26

Severe loss of positional information when detecting deviations in multiple trajectories

Tripathy, Srimant P., Barrett, Brendan T. January 2004 (has links)
Human observers can simultaneously track up to five targets in motion (Z. W. Pylyshyn & R. W. Storm, 1988). We examined the precision for detecting deviations in linear trajectories by measuring deviation thresholds as a function of the number of trajectories (T). When all trajectories in the stimulus undergo the same deviation, thresholds are uninfluenced by T for T <= 10. When only one of the trajectories undergoes a deviation, thresholds rise steeply as T is increased [e.g., 3.3° (T = 1), 12.3° (T = 2), 32.9° (T = 4) for one observer]; observers are unable to simultaneously process more than one trajectory in our threshold-measuring paradigm. When the deviating trajectory is cued (e.g., using a different color), varying T has little influence on deviation threshold. The use of a different color for each trajectory does not facilitate deviation detection. Our current data suggest that for deviations that have low discriminability (i.e., close to threshold) the number of trajectories that can be monitored effectively is close to one. In contrast, when stimuli containing highly discriminable (i.e., substantially suprathreshold) deviations are used, as many as three or four trajectories can be simultaneously monitored (S. P. Tripathy, 2003). Our results highlight a severe loss of positional information when attempting to track multiple objects, particularly in a threshold paradigm.
27

Is the ability to identify deviations in multiple trajectories compromised by amblyopia?

Tripathy, Srimant P., Levi, D.M. January 2006 (has links)
Amblyopia results in a severe loss of positional information and in the ability to accurately enumerate objects (V. Sharma, D. M. Levi, & S. A. Klein, 2000). In this study, we asked whether amblyopia also disrupts the ability to track a near-threshold change in the trajectory of a single target amongst multiple similar potential targets. In the first experiment, we examined the precision for detecting a deviation in the linear motion trajectory of a dot by measuring deviation thresholds as a function of the number of moving trajectories (T). As in normal observers, we found that in both eyes of amblyopes, threshold increases steeply as T increases from 1 to 4. Surprisingly, for T = 1-4, thresholds were essentially identical in both eyes of the amblyopes and were similar to those of normal observers. In a second experiment, we measured the precision for detecting a deviation in the orientation of a static, bilinear "trajectory" by again measuring deviation thresholds (i.e., angle discrimination) as a function of the number of oriented line "trajectories" (T). Relative to the nonamblyopic eye, amblyopes show a marked threshold elevation for a static target when T = 1. However, thresholds increased with T with approximately the same slope as in their preferred eye and in the eyes of the normal controls. We conclude that while amblyopia disrupts static angle discrimination, amblyopic dynamic deviation detection thresholds are normal or very nearly so.
28

Thermal-RGB Sensory Data for Reliable and Robust Perception

El Ahmar, Wassim 29 November 2023 (has links)
The significant advancements and breakthroughs achieved in Machine Learning (ML) have revolutionized the field of Computer Vision (CV), where numerous real-world applications now utilize state-of-the-art advancements in the field. Advanced video surveillance and analytics, entertainment, and autonomous vehicles are a few examples that rely heavily on reliable and accurate perception systems. Deep learning usage in Computer Vision has come a long way since its breakthrough in 2012 with the introduction of AlexNet. Convolutional Neural Networks (CNNs) have evolved to become more accurate and reliable. This is attributed to advancements in GPU parallel processing and to the recent availability of large-scale, high-quality annotated datasets that allow the training of complex models. However, ML models can only be as good as the data they train on and the data they receive in production. In real-world environments, a perception system often needs to operate in different environments and conditions (weather, lighting, obstructions, etc.). As such, it is imperative for a perception system to utilize information from different types of sensors to mitigate the limitations of individual sensors. In this dissertation, we focus on studying the efficacy of using thermal sensors to enhance the robustness of perception systems. We focus on two common vision tasks: object detection and multiple object tracking. Through our work, we prove the viability of thermal sensors as a complement, and in some scenarios a replacement, to RGB cameras. Given their important applications in autonomous vehicles and surveillance, we focus our research on pedestrian and vehicle perception. We also introduce the world's first (to the best of our knowledge) large-scale dataset for pedestrian detection and tracking that includes thermal and corresponding RGB images.
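The abstract does not describe the dissertation's specific fusion architecture. One simple baseline for combining aligned thermal and RGB frames is early fusion: stack the thermal channel alongside the RGB channels into a single 4-channel input for a CNN, normalizing each channel so the modalities are on a comparable scale. A minimal sketch under that assumption (not the author's method):

```python
import numpy as np


def early_fusion(rgb, thermal):
    """Stack an aligned RGB frame (H, W, 3) and thermal frame (H, W) into a
    single 4-channel array, normalizing each channel to zero mean and unit
    variance so neither modality dominates the input."""
    # dstack promotes the (H, W) thermal frame to (H, W, 1) and concatenates.
    fused = np.dstack([rgb.astype(np.float64), thermal.astype(np.float64)])
    mean = fused.mean(axis=(0, 1), keepdims=True)
    std = fused.std(axis=(0, 1), keepdims=True) + 1e-8  # avoid divide-by-zero
    return (fused - mean) / std
```

The first convolutional layer of the detector then simply takes 4 input channels instead of 3; late-fusion alternatives instead run separate backbones per modality and merge feature maps.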
29

Efficient CNN-based Object ID Association Model for Multiple Object Tracking

Danesh, Parisasadat January 2023 (has links)
No description available.
30

Debris Tracking In A Semistable Background

Vanumamalai, Karthik Kalathi 01 January 2005 (has links)
Object tracking plays a pivotal role in many computer vision applications such as video surveillance, human gesture recognition, and object-based video compression such as MPEG-4. Automatic detection of a moving object and tracking of its motion have always been important topics in computer vision and robotics. This thesis deals with the problem of detecting the presence of debris or any other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; there is therefore a need for background stabilization before tracking any moving object in the scene. Two problems are considered here, and in both, footage from a Space Shuttle launch is used, with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, where the transformation parameters (translation, rotation) are estimated automatically. This information is then passed to a Kalman filtering stage, which produces a mask image used to find high-intensity areas of potential interest.
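The translational part of FFT-based registration is classically done with phase correlation: the normalized cross-power spectrum of two frames has an inverse FFT that peaks at the inter-frame shift. A minimal sketch of that step (integer shifts only; rotation, which the thesis also estimates, is omitted here):

```python
import numpy as np


def phase_correlation(ref, cur):
    """Estimate the integer (dy, dx) translation taking `ref` to `cur`
    via FFT-based phase correlation."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(cur)
    # Normalized cross-power spectrum: its inverse FFT is a delta at the shift.
    cross = G * np.conj(F)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Once the estimated shift is compensated, the stabilized frames can be differenced, and the residual motion (candidate debris) fed to the Kalman filtering stage.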
