131

Goal-Aware Robocentric Mapping and Navigation of a Quadrotor Unmanned Aerial Vehicle

Biswas, Srijanee 18 June 2019 (has links)
No description available.
132

Automatic object detection and tracking for eye-tracking analysis

Cederin, Liv, Bremberg, Ulrika January 2023 (has links)
In recent years, eye-tracking technology has gained considerable attention, facilitating analysis of gaze behavior and human visual attention. However, eye-tracking analysis often requires manual annotation of the objects being gazed upon, making quantitative data analysis a difficult and time-consuming process. This thesis explores object detection and object tracking applied to scene camera footage from mobile eye-tracking glasses. We have evaluated the performance of state-of-the-art object detectors and trackers, resulting in an automated pipeline specialized in detecting and tracking objects in scene videos. Motion blur constitutes a significant challenge with moving cameras, complicating tasks such as object detection and tracking. To address this, we explored two approaches. The first involved retraining object detection models on datasets augmented with motion-blurred images, while the second involved preprocessing the video frames with deblurring techniques. The findings of our research contribute insights into efficient approaches for detecting and tracking objects in scene camera footage from eye-tracking glasses. Of the technologies we tested, we found that motion deblurring using DeblurGAN-v2, together with a DINO object detector combined with the StrongSORT tracker, achieved the highest accuracies. Furthermore, we present an annotated dataset, consisting of frames from recordings with eye-tracking glasses, that can be utilized for evaluating object detection and tracking performance.
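The pipeline described above lends itself to a simple per-frame structure. Below is a minimal sketch of the deblur-detect-track loop, assuming pretrained DeblurGAN-v2, DINO, and StrongSORT models are already wrapped as callables; the wrapper names are hypothetical, and the thesis's actual implementation is not reproduced here.

```python
# Minimal sketch of the deblur -> detect -> track pipeline, assuming `deblur`,
# `detect`, and `tracker` wrap DeblurGAN-v2, DINO, and StrongSORT respectively.
import cv2

def run_pipeline(video_path, deblur, detect, tracker):
    """Deblur each frame, detect objects, then associate detections over time."""
    cap = cv2.VideoCapture(video_path)
    per_frame_tracks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = deblur(frame)                    # e.g. a DeblurGAN-v2 forward pass
        boxes, scores, labels = detect(frame)    # e.g. DINO detections
        tracks = tracker.update(boxes, scores, labels, frame)  # e.g. StrongSORT
        per_frame_tracks.append(tracks)          # (track_id, box, label) per frame
    cap.release()
    return per_frame_tracks
```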
133

Scene Understanding For Real Time Processing Of Queries Over Big Data Streaming Video

Aved, Alexander 01 January 2013 (has links)
With heightened security concerns across the globe and the increasing need to monitor, preserve and protect infrastructure and public spaces to ensure proper operation, quality assurance and safety, numerous video cameras have been deployed. Accordingly, they also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is not scalable or cost effective. Human operators can become subjective and fatigued, may exhibit bias, and find it difficult to maintain high levels of vigilance when capturing, searching and recognizing events that occur infrequently or in isolation. These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software much like traditional database applications are developed today. Such developed video stream processing applications and ad hoc queries are able to "reuse" advanced image processing techniques that have been developed. This results in lower software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies. This is an important step towards enabling a general privacy certification for video surveillance systems by leveraging a standardized privacy specification language. With the potential to impact many important fields ranging from security and assembly line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology; success in addressing the "trust" issue will enable many new surveillance-related applications. Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
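To make the idea of an ad hoc query over live video concrete, here is a purely hypothetical illustration; the query syntax and client API below are invented for this sketch, and the dissertation defines its own query language and operators.

```python
# Hypothetical illustration of a continuous query over a live camera stream.
# The SQL-like syntax and the lvdbms_client handle are invented for this
# sketch; the reusable operators (`appears`, `inside`) stand in for the kind
# of image-processing primitives an LVDBMS-style engine would evaluate.
query = """
SELECT camera_id, timestamp, bbox
FROM   camera_7
WHERE  appears(person) AND inside(person, restricted_zone)
WITHIN SLIDING WINDOW 5 SECONDS
"""

def on_match(event):
    # The engine evaluates the operators; the application only consumes events.
    print(event["camera_id"], event["timestamp"], event["bbox"])

# subscription = lvdbms_client.submit(query, callback=on_match)  # hypothetical
```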
134

Depth based Sensor Fusion in Object Detection and Tracking

Sikdar, Ankita 01 June 2018 (has links)
No description available.
135

Color Feature Integration with Directional Ringlet Intensity Feature Transform for Enhanced Object Tracking

Geary, Kevin Thomas January 2016 (has links)
No description available.
136

AI-assisterad spårning av flygande objekt och distansberäkning inom kastgrenar / AI-assisted Tracking of Flying Objects and Distance Measuring within Throwing Sports

Jonsson, Fredrik, Eriksson, Jesper January 2022 (has links)
This thesis project was carried out over ten weeks on behalf of the company BitSim NOW. The manual method currently used to measure the length of shot puts presents a risk of inaccurate results along with a risk of injury for the measuring personnel. With the help of technical aids, a solution with more accurate measurements and a lower risk of injury could be implemented in the sport of shot put. This report presents a solution that uses artificial intelligence to first identify the shot in video footage and then calculate the length of the put using a formula for the throwing parabola. The solution is then compared with a method that does not use artificial intelligence, to determine which of the two is superior. The parameters compared were the accuracy of the measured length and the quality of the tracking. The results were analyzed in relation to the aims of the project and then put into a larger context.
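As a rough illustration of the throwing-parabola calculation mentioned above, the sketch below computes a throw's horizontal range from an estimated release speed, angle, and height; the function name and parameter values are illustrative assumptions, not the thesis's implementation.

```python
# Standard projectile range from release speed, angle, and release height,
# assuming those quantities have been estimated from the tracked trajectory.
import math

def throw_distance(v, angle_deg, h, g=9.81):
    """Horizontal range of a projectile released at speed v (m/s), at
    angle_deg (degrees) above horizontal, from height h (m) above the ground."""
    theta = math.radians(angle_deg)
    vx = v * math.cos(theta)
    vy = v * math.sin(theta)
    # Time of flight until the shot lands at ground level (y = 0).
    t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g
    return vx * t

# Example: a 13 m/s release at 38 degrees from 2.1 m gives roughly 19 m.
print(round(throw_distance(13.0, 38.0, 2.1), 2))
```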
137

Assisted Annotation of Sequential Image Data With CNN and Pixel Tracking / Assisterande annotering av sekvensiell bilddata med CNN och pixelspårning

Chan, Jenny January 2021 (has links)
In this master thesis, different neural networks were investigated for annotating objects in video streams with partially annotated data as input. Annotation in this thesis refers to bounding boxes around the targeted objects. Two different methods were used, ROLO and GOTURN: object detection with tracking, and object tracking with pixels, respectively. The dataset used for validation is surveillance footage with varying image resolution, image size and sequence length. Modifications of the original models were made to fit the test data. Promising results were shown for the modified GOTURN, where the partially annotated data was used to assist the tracking. The model is robust and provides sufficiently accurate object detections for practical use. With the new model, the human effort required for image annotation can be reduced by at least half.
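A minimal sketch of how sparse manual annotations can assist a tracker, in the spirit of the modified GOTURN approach described above; a tracker object with `init`/`update` methods is assumed, and the thesis's actual modifications are not reproduced here.

```python
# Propagate sparse human-labeled bounding boxes through a video with a
# single-object tracker, re-seeding the tracker at every labeled keyframe.
def propagate_annotations(frames, keyframe_boxes, tracker):
    """frames: list of images; keyframe_boxes: {frame_index: bbox} manual labels.
    Returns a bbox for every frame index."""
    boxes = {}
    current = None
    for i, frame in enumerate(frames):
        if i in keyframe_boxes:
            current = keyframe_boxes[i]      # trust the human label
            tracker.init(frame, current)     # re-seed the tracker (assumed API)
        elif current is not None:
            current = tracker.update(frame)  # propagate between keyframes
        boxes[i] = current
    return boxes
```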
138

Pedestrian Tracking by using Deep Neural Networks / Spårning av fotgängare med hjälp av Deep Neural Network

Peng, Zeng January 2021 (has links)
This project aims at using deep learning to solve the pedestrian tracking problem for autonomous driving. The research area is in the domain of computer vision and deep learning. Multi-Object Tracking (MOT) aims at tracking multiple targets simultaneously in video data. The main application scenarios of MOT are security monitoring and autonomous driving. In these scenarios we often need to track many targets at the same time, which is not possible with object detection or single-object tracking algorithms alone, given their lack of stability and usability; we therefore need to explore the area of multiple object tracking. The proposed method breaks MOT into different stages and utilizes the motion and appearance information of targets to track them in the video data. We used three different object detectors to detect the pedestrians in frames, a person re-identification model as appearance feature extractor, and a Kalman filter as motion predictor. Our proposed model achieves 47.6% MOT accuracy and a 53.2% IDF1 score, while the results obtained by the model without the person re-identification module are only 44.8% and 45.8%, respectively. Our experimental results indicate that a robust multiple object tracking algorithm can be achieved by splitting the task into stages and improved by representative DNN-based appearance features.
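The association step at the core of such a pipeline can be sketched as follows, assuming detections, re-identification embeddings, and Kalman-predicted boxes are already available; the cost weighting is an illustrative assumption rather than the thesis's exact formulation, and handling of unmatched tracks is omitted.

```python
# DeepSORT-style association: blend appearance (cosine distance on re-ID
# embeddings) with motion (IoU against Kalman-predicted boxes), then solve
# the assignment with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(a, b):
    """Pairwise IoU between two sets of [x1, y1, x2, y2] boxes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def associate(track_embeds, det_embeds, predicted_boxes, det_boxes, alpha=0.7):
    """Match tracks to detections; alpha weights appearance vs. motion cost."""
    t = track_embeds / np.linalg.norm(track_embeds, axis=1, keepdims=True)
    d = det_embeds / np.linalg.norm(det_embeds, axis=1, keepdims=True)
    appearance_cost = 1.0 - t @ d.T                              # re-ID features
    motion_cost = 1.0 - iou_matrix(predicted_boxes, det_boxes)   # Kalman prediction
    cost = alpha * appearance_cost + (1 - alpha) * motion_cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))   # (track_index, detection_index) pairs
```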
139

Optical Satellite/Component Tracking and Classification via Synthetic CNN Image Processing for Hardware-in-the-Loop testing and validation of Space Applications using free flying drone platforms

Peterson, Marco Anthony 21 April 2022 (has links)
The proliferation of reusable space vehicles has fundamentally changed how we inject assets into orbit and beyond, increasing the reliability and frequency of launches and leading to the rapid development and adoption of new technologies in the aerospace sector, such as computer vision (CV), machine learning (ML), and distributed networking. All these technologies are necessary to enable genuinely autonomous decision-making for space-borne platforms as our spacecraft travel further into the solar system and our mission sets become more ambitious, requiring true "human out of the loop" solutions for a wide range of engineering and operational problems. Systems proficient at classifying, tracking, capturing, and ultimately manipulating orbital assets and components for maintenance and assembly in the persistently dynamic environment of space and on the surface of other celestial bodies, tasks commonly referred to as On-Orbit Servicing and In-Space Assembly, have a unique automation potential. Given the inherent dangers of the manned spaceflight and extravehicular activity (EVA) methods currently employed to perform spacecraft construction and maintenance tasks, coupled with the current limitations on long-duration human flight outside of low Earth orbit, space robotics armed with generalized sensing and control machine learning architectures is a tremendous enabling technology. However, the large amounts of sensor data required to adequately train neural networks for these space-domain tasks are either limited or non-existent, requiring alternative means of data collection or generation. Additionally, the tools and methodologies required for hardware-in-the-loop simulation, testing, and validation of these new technologies outside of multimillion-dollar facilities are largely in their developmental stages. This dissertation proposes a novel approach for simulating space-based computer vision sensing and robotic control using both physical and virtual-reality testing environments. The methodology is designed to be both affordable and expandable, enabling hardware-in-the-loop simulation and validation of space systems at large scale across multiple institutions. While the computer vision models in this work are narrowly focused on solving imagery problems found on orbit, the approach can be expanded to any problem set that requires robust onboard computer vision, robotic manipulation, and free-flight capabilities. / Doctor of Philosophy / The real-world imagery of space assets and planetary surfaces required to train neural networks to autonomously identify, classify, and make decisions in these environments is limited, non-existent, or prohibitively expensive to obtain. This work leverages the Unreal Engine, motion capture, and theatre projection technologies, combined with robotics, computer vision, and machine learning, to recreate these worlds for optical machine learning testing and validation for space and other celestial applications. The dissertation also incorporates domain randomization methods to increase neural network performance for the above-mentioned applications.
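As a rough sketch of the domain-randomization idea mentioned in the abstract, the snippet below draws randomized rendering parameters for each synthetic training image; `render_scene` is a hypothetical stand-in for the Unreal Engine pipeline, and the parameter names and ranges are illustrative assumptions.

```python
# Domain randomization: vary lighting, pose, and background per synthetic
# image so a detector trained on renders transfers better to real imagery.
import random

def randomized_sample(asset_id, render_scene):
    params = {
        "sun_intensity": random.uniform(0.2, 5.0),       # harsh orbital lighting
        "camera_distance_m": random.uniform(2.0, 50.0),
        "roll_pitch_yaw": [random.uniform(0, 360) for _ in range(3)],
        "background": random.choice(["starfield", "earth_limb", "lunar_surface"]),
        "texture_noise": random.uniform(0.0, 0.3),
    }
    image, labels = render_scene(asset_id, **params)  # image + ground-truth boxes
    return image, labels, params
```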
140

Sensory memory is allocated exclusively to the current event-segment

Tripathy, Srimant P., Ögmen, H. 19 December 2018 (has links)
The Atkinson-Shiffrin modal model forms the foundation of our understanding of human memory. It consists of three stores (Sensory Memory (SM), also called iconic memory, Short-Term Memory (STM), and Long-Term Memory (LTM)), each tuned to a different time-scale. Since its inception, the STM and LTM components of the modal model have undergone significant modifications, while SM has remained largely unchanged, representing a large capacity system funneling information into STM. In the laboratory, visual memory is usually tested by presenting a brief static stimulus and, after a delay, asking observers to report some aspect of the stimulus. However, under ecological viewing conditions, our visual system receives a continuous stream of inputs, which is segmented into distinct spatio-temporal segments, called events. Events are further segmented into event-segments. Here we show that SM is not an unspecific general funnel to STM but is allocated exclusively to the current event-segment. We used a Multiple-Object Tracking (MOT) paradigm in which observers were presented with disks moving in different directions, along bi-linear trajectories, i.e., linear trajectories, with a single deviation in direction at the mid-point of each trajectory. The synchronized deviation of all of the trajectories produced an event stimulus consisting of two event-segments. Observers reported the pre-deviation or the post-deviation directions of the trajectories. By analyzing observers' responses in partial- and full-report conditions, we investigated the involvement of SM for the two event-segments. The hallmarks of SM hold only for the current event-segment. As the large capacity SM stores only items involved in the current event-segment, the need for event-tagging in SM is eliminated, speeding up processing in active vision. By characterizing how memory systems are interfaced with ecological events, this new model extends the Atkinson-Shiffrin model by specifying how events are stored in the first stage of multi-store memory systems.
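The bi-linear trajectory stimulus can be sketched as follows: each disk travels on a straight line, and all disks deviate in direction at the same midpoint, producing two event-segments. Speeds, deviation angles, and frame counts here are illustrative assumptions, not the paper's exact parameters.

```python
# Generate one disk's bi-linear trajectory: a linear pre-deviation leg, a
# single direction change at the midpoint, then a linear post-deviation leg.
import math
import random

def bilinear_trajectory(x0, y0, n_frames=60, speed=3.0):
    """Return per-frame (x, y) positions for one disk."""
    pre = math.radians(random.uniform(0, 360))                  # pre-deviation heading
    post = pre + math.radians(random.choice([-1, 1]) * random.uniform(20, 70))
    pts, x, y = [], x0, y0
    for frame in range(n_frames):
        angle = pre if frame < n_frames // 2 else post          # synchronized midpoint
        x += speed * math.cos(angle)
        y += speed * math.sin(angle)
        pts.append((x, y))
    return pts
```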
