81 |
Development of Dropwise Additive Manufacturing with non-Brownian Suspensions: Applications of Computer Vision and Bayesian Modeling to Process Design, Monitoring and Control: Video Files in Chapter 5 and Appendix E
Andrew J. Radcliffe (9080312) 24 July 2020 (has links)
Video files found in Chapter 5: AUTOMATED OBJECT TRACKING, EVENT DETECTION AND RECOGNITION FOR HIGH-SPEED VIDEO OF DROP FORMATION PHENOMENA.
Video files found in APPENDIX E. CHAPTER 5, RESOURCE 2.
|
82 |
Multi-Modal Visual Tracking Using Infrared Imagery
Wettermark, Emma, Berglund, Linda January 2021 (has links)
Generic visual object tracking is the task of tracking one or several objects in all frames of a video, knowing only the location and size of the target in the initial frame. Visual tracking can be carried out in the infrared and the visual spectrum simultaneously; this is known as multi-modal tracking. Utilizing both spectra can result in a more versatile tracker, since tracking in infrared imagery makes it possible to detect objects even in poor visibility or complete darkness. However, infrared imagery lacks the level of detail present in visual images. A common method for visual tracking is to use discriminative correlation filters (DCF), which are used to detect an object in every frame of an image sequence. This thesis investigates aspects of a DCF-based tracker operating in the two different modalities, infrared and visual imagery. First, it was investigated whether tracking benefits from using two channels instead of one, and what happens to the tracking result if one of those channels is degraded by an external cause. It was also investigated whether the addition of image features can further improve the tracking. The results show that tracking improves when using two channels instead of only a single channel. They also show that utilizing two channels is a good way to create a robust tracker that is still able to perform even when one of the channels is degraded. Deep features, extracted from a pre-trained convolutional neural network, were the image feature that improved tracking the most, although their use made the tracking significantly slower.
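The single-channel core of a DCF tracker of the kind investigated above can be sketched in a few lines. This is a minimal MOSSE-style illustration, not the thesis's implementation: the function names, the Gaussian response shape, and the regularisation value `lam` are assumptions made for the sketch.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Solve a single-channel correlation filter (MOSSE-style) for one
    target patch: H* = G . conj(F) / (F . conj(F) + lam), elementwise."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peaked at the patch centre.
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(np.fft.ifftshift(g))
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(H, patch):
    """Correlate the filter with a new patch; the response peak gives
    the cyclic shift of the target relative to the training patch."""
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return int(dy), int(dx)
```

A multi-channel or deep-feature DCF, as evaluated in the thesis, sums such per-channel filter responses before taking the peak.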
|
83 |
Efficient Multi-Object Tracking On Unmanned Aerial Vehicle
Xiao Hu (12469473) 27 April 2022 (has links)
Multi-object tracking has been well studied in the field of computer vision. Meanwhile, with the advancement of Unmanned Aerial Vehicle (UAV) technology, the flexibility and accessibility of UAVs have drawn research attention to deploying multi-object tracking on UAVs. Conventional solutions usually adopt the "tracking-by-detection" paradigm, in which tracking is achieved by detecting objects in consecutive frames and then associating them with re-identification. However, the dynamic background, crowded small objects, and limited computational resources make multi-object tracking on UAVs more challenging. Energy-efficient multi-object tracking solutions for drone-captured video are therefore in critical demand by the research community.
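The association step of the "tracking-by-detection" paradigm described above is often a greedy IoU matching between existing tracks and new detections, with re-identification layered on top. A minimal sketch, assuming axis-aligned boxes and a hypothetical `iou_min` gate (not any particular submission's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_min=0.3):
    """Greedily match tracks (dict id -> box) to detections (list of
    boxes) by best IoU; returns (matches, unmatched_track_ids,
    unmatched_detection_indices)."""
    pairs = sorted(
        ((iou(t_box, d_box), t_id, d_idx)
         for t_id, t_box in tracks.items()
         for d_idx, d_box in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, t_id, d_idx in pairs:
        if score < iou_min:
            break  # remaining pairs overlap too little to match
        if t_id in used_t or d_idx in used_d:
            continue
        matches.append((t_id, d_idx))
        used_t.add(t_id)
        used_d.add(d_idx)
    unmatched_t = [t for t in tracks if t not in used_t]
    unmatched_d = [d for d in range(len(detections)) if d not in used_d]
    return matches, unmatched_t, unmatched_d
```

Unmatched detections typically spawn new tracks, while unmatched tracks are aged out after a few frames.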
To stimulate innovation in both industry and academia, we organized the 2021 Low-Power Computer Vision Challenge with a UAV Video track focusing on multi-class multi-object tracking on customized UAV video. This thesis analyzes the qualified submissions of 17 different teams and provides a detailed analysis of the best solution. Methods and future directions for energy-efficient AI and computer vision research are discussed. The solutions and insights presented in this thesis are expected to facilitate future research and applications in the field of low-power vision on UAVs.
With the knowledge gathered from the submissions, an optical-flow-oriented multi-object tracking framework, named OF-MOT, is proposed to address a similar problem on a more realistic drone-captured video dataset. OF-MOT uses the motion information of each object detected in the previous frame to aid detection in the current frame, then applies a customized object tracker that uses the motion information to associate the detected instances. OF-MOT is evaluated on a drone-captured video dataset and achieves 24 FPS with 17% accuracy on a modern Titan X GPU, showing that optical flow can effectively improve multi-object tracking.
Both the competition results analysis and OF-MOT provide insights and experimental results on deploying multi-object tracking on UAVs. We hope these findings will facilitate future research and applications in the field of UAV vision.
|
84 |
Aging, Object-Based Inhibition, and Online Data Collection
Huether, Asenath Xochitl Arauza January 2020 (has links)
Visual selective attention operates in space- and object-based frames of reference. Stimulus salience and task demands influence whether a space- or object-based frame of reference guides attention. I conducted two experiments for the present dissertation to evaluate age patterns in the role of inhibition in object-based attention. The biased competition account (Desimone & Duncan, 1995) proposes that one mechanism through which targets are selected is suppression of irrelevant stimuli. The inhibitory deficit hypothesis (Hasher & Zacks, 1988) predicts that older adults do not appropriately suppress or ignore irrelevant information. The purpose of the first study was to evaluate whether inhibition of return (IOR) patterns, originally found in a laboratory setting, could be replicated with online data collection (prompted by the COVID-19 pandemic). Inhibition of return is a cognitive mechanism that biases attention against returning to previously engaged items. In a lab setting, young and older adults produced location- and object-based IOR. In the current study, both types of IOR were also observed within object boundaries, although location-based IOR from data collected online was smaller than that from the laboratory. In addition, there was no evidence of an age-related reduction in IOR effects. There was some indication that sampling differences or testing circumstances led to increased variability in online data.
The purpose of the second study was to evaluate age differences in top-down inhibitory processes during an attention-demanding object tracking task. Data were collected online. I used a dot-probe multiple object tracking (MOT) task to evaluate distractor suppression during target tracking. Both young and older adults showed poorer dot-probe detection accuracies when the probes appeared on distractors compared to when they appeared at empty locations, reflecting inhibition.
The findings suggest that top-down inhibition works to suppress distractors during target tracking and that older adults show a relatively preserved ability to inhibit distractor objects. The findings across both experiments support models of selective attention that posit that goal-related biases suppress distractor information and that inhibition can be directed selectively by both young and older adults on locations and objects in the visual field.
|
85 |
Limitations of visuospatial attention (and how to circumvent them)
Wahn, Basil 15 May 2017 (has links)
In daily life, humans are bombarded with visual input. Yet, their attentional capacities for processing this input are severely limited. Several studies, including my own, have investigated factors that influence these attentional limitations and have identified methods to circumvent them. In the present thesis, I provide a review of my own and others' findings. I first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. I then turn to studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, I discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. Based on the reviewed studies, I conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented by distributing attentional processing. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task that does require object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may potentially facilitate joint performance. Generally, these findings are applicable to real-world scenarios such as aviation or car-driving to circumvent limitations of visuospatial attention.
|
86 |
Object Trajectory Estimation Using Optical Flow
Liu, Shuo 01 May 2009 (has links)
Object trajectory tracking is an important topic in many different areas. It is widely used in robotics, traffic monitoring, the movie industry, and elsewhere. Optical flow is a useful method in the object tracking field: it computes the motion of each pixel between two frames and thus provides a possible way to recover the trajectories of objects. Numerous papers describe implementations of optical flow. Some results are acceptable, but many projects face limitations. In most previous applications the camera is static, so it is easy to apply optical flow to identify the moving targets in a scene and obtain their trajectories. When the camera moves, a global motion is added to the local motion, which complicates the problem. In this thesis we use a combination of optical flow and image correlation to deal with this problem, with good experimental results. For trajectory estimation, we incorporate a Kalman filter with the optical flow. Not only can we smooth the motion history, but we can also predict the motion into the next frame. The addition of a spatial-temporal filter further improves the results in later processing.
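The Kalman-filter stage described above can be illustrated with a constant-velocity model over noisy 2D positions, such as object centroids obtained from optical flow. This is a generic textbook sketch, not the thesis's code; the noise parameters `q` and `r` are placeholder assumptions:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman matrices for the state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # we observe position only
    Q = q * np.eye(4)                     # process noise (assumed)
    R = r * np.eye(2)                     # measurement noise (assumed)
    return F, H, Q, R

def kalman_track(measurements, dt=1.0):
    """Filter a sequence of noisy (x, y) positions; returns the smoothed
    positions plus a one-step prediction into the next frame."""
    F, H, Q, R = make_cv_kalman(dt)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        # Predict forward one frame.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured position.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out), (F @ x)[:2]
```

The one-step prediction is what lets a tracker keep following a target through a frame where optical flow fails, as the abstract's "estimate the motion into the next frame" suggests.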
|
87 |
Fimp-bot : Robot för upplockning av cigarettfimpar / Fimp-bot : Robot for collection of cigarette butts
Geiberger, Philipp, Hanna, Ivan January 2019 (has links)
Approximately one billion cigarette butts are discarded on Swedish streets every year. This poses risks to both people and the environment that could be reduced considerably by robots that clean up the butts. This report presents a design proposal for a robot that performs exactly this task. Given the project's constraints, only the functionality required for the robot to perform its task in an idealised indoor environment is developed. A prototype is built to examine the precision of detection and pick-up of cigarette butts with the chosen design. To find cigarette butts, the prototype is driven and steered by two DC motors connected to separate H-bridges. An ultrasonic sensor detects large obstacles so that the prototype can avoid them. For detecting the cigarette butts, a Pixy camera is used, which identifies objects by computing their colour signature. When a butt is detected, the prototype steers towards it. The butt is then picked up by a mechanism based on servomotors controlled by an Arduino Uno microcontroller. The mechanism consists of a ramp with a door that is lowered to the ground and guides the butt into a container. The construction was largely built from multiple electrical components, a building kit, and parts designed in Solid Edge and printed on an Ultimaker 3D printer. Test results show that the Pixy camera is a weak point, as it is very sensitive to lighting. It is also much harder for it to detect butts in the standard orange colour than red butts. Pick-up tests in good lighting with red cigarette butts give a 78% success rate, showing that the design works well. A weakness that lowered this rate, and thus an area for future development, was that the prototype was unreliable at driving straight. / Approximately one billion cigarette butts are thrown onto Swedish streets each year.
This leads to risks for both humans and the environment that could be reduced considerably with robots that clean up the cigarette butts. This report deals with a construction proposal for a robot that performs exactly this task. Considering the restraints this project faces, only the functionality required for the robot to perform the task in an idealised indoor environment is developed. A prototype is constructed to examine the precision of detection and collection of cigarette butts for the chosen construction. To find cigarette butts, the prototype is driven and steered by two DC motors connected to separate H-bridges. An ultrasonic sensor detects large obstacles so that the prototype can avoid them. For detection of cigarette butts, a Pixy camera is used, which identifies objects by calculating their colour signature. When a cigarette butt is detected, the prototype steers towards it. It then picks up the cigarette butt with a mechanism based on servomotors that are controlled by an Arduino Uno microcontroller. This mechanism is made up of a ramp with a door that tilts down to the ground and leads the butt into a container. The construction was built mainly from multiple electric components, a building kit, and parts designed in Solid Edge and 3D-printed on an Ultimaker. Results of conducted tests show that the Pixy camera is a weak spot, as it is very light-sensitive. Furthermore, it is much harder for the camera to detect cigarette butts in the standard orange colour than red ones. Tests of the cigarette butt collection performance showed a success ratio of 78%, which shows that the construction works well. A weakness that lowered the success ratio, and thus a target for future development, was that the prototype was unreliable at driving straight forward.
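The colour-signature detection performed by the Pixy camera can be approximated in software by per-channel thresholding around a reference colour. This is a hedged sketch of the idea, not the Pixy firmware's actual algorithm; the tolerance and minimum-pixel values are arbitrary assumptions:

```python
import numpy as np

def detect_by_color(rgb, signature, tol=30, min_pixels=10):
    """Return the (x, y) centroid of pixels whose colour is within
    `tol` per channel of the reference `signature`, or None if too
    few pixels match.  rgb: H x W x 3 uint8 array."""
    diff = np.abs(rgb.astype(int) - np.array(signature, int))
    mask = np.all(diff <= tol, axis=-1)
    if mask.sum() < min_pixels:   # too few matching pixels: no object
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())
```

The light sensitivity reported above follows naturally from such a scheme: a fixed tolerance around a colour signature fails when illumination shifts all channel values, and low-saturation colours like orange sit closer to the background than red does.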
|
88 |
Multi-Vehicle Detection and Tracking in Traffic Videos Obtained from UAVs
Balusu, Anusha 29 October 2020 (has links)
No description available.
|
89 |
Comparison of camera data types for AI tracking of humans in indoor combat training
Zenk, Viktor, Bach, Willy January 2022 (has links)
Multiple object tracking (MOT) can be an efficient tool for finding patterns in video monitoring data. In this thesis, we investigate which type of video data works best for MOT in an indoor combat training scenario. The three types of camera data evaluated are color data, near-infrared (NIR) data, and depth data. In order to evaluate which of these lend themselves best to MOT, we develop object tracking models based on YOLOv5 and DeepSORT and train the models on the respective types of data. In addition to the individual models, ensembles of the three models are also developed, to see if any increase in performance can be gained. The models are evaluated using the well-established MOT evaluation metrics, as well as by studying the frame rate performance of each model. The results are rigorously analyzed using statistical significance tests, to ensure only well-supported conclusions are drawn. These evaluations and analyses show mixed results. Regarding the MOT metrics, the performance of most models was not shown to be significantly different from that of most other models, so while differences in performance were observed, they cannot be assumed to hold over larger sample sizes. Regarding frame rate, we find that the ensemble models are significantly slower than the individual models on their own.
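One of the well-established MOT metrics referred to above is MOTA, which aggregates misses, false positives, and identity switches over all frames. A minimal sketch of the aggregation (the per-frame dict format here is an assumption for illustration, not the thesis's data layout):

```python
def mota(frames):
    """MOTA from per-frame error counts.

    frames: iterable of dicts with keys 'fn' (misses), 'fp' (false
    positives), 'idsw' (identity switches) and 'gt' (ground-truth
    objects present).  MOTA = 1 - (FN + FP + IDSW) / GT summed over
    all frames; it can be negative when errors exceed targets."""
    fn = sum(f['fn'] for f in frames)
    fp = sum(f['fp'] for f in frames)
    idsw = sum(f['idsw'] for f in frames)
    gt = sum(f['gt'] for f in frames)
    return 1.0 - (fn + fp + idsw) / gt
```

Because MOTA is a single score pooled over frames, comparing trackers on it still requires the kind of significance testing over repeated runs that the thesis performs.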
|
90 |
Collision Avoidance for Complex and Dynamic Obstacles : A study for warehouse safety
Ljungberg, Sandra, Brandås, Ester January 2022 (has links)
Today, a group of automated guided vehicles at Toyota Material Handling Manufacturing Sweden detect and avoid objects primarily using 2D LiDAR, whose shortcomings are that it scans only in a 2D plane and misses objects close to the ground. Several dynamic obstacles exist in the vehicles' environment. Protruding forks are one such obstacle, impossible to detect and avoid with the current choice of sensor and its placement. This thesis investigates possible solutions and limitations of using a single RGB camera for obstacle detection, tracking, and avoidance. The obstacle detection uses the deep learning model YOLOv5s. A solution for semi-automatic data gathering and labeling is designed, and pre-trained weights are chosen to minimize the amount of labeled data needed. Two different approaches are implemented for tracking the object. The YOLOv5s detections form the foundation of the first, where 2D bounding boxes are used as measurements in an Extended Kalman Filter (EKF). Fiducial markers form the second approach, used as measurements in another EKF. A state-lattice motion planner is designed to find a feasible path around the detected obstacle. The chosen graph search algorithm is ARA*, designed to initially find a suboptimal path and improve it if time allows. The detection works successfully with an average precision of 0.714. The filter using 2D bounding boxes cannot differentiate between a clockwise and counterclockwise rotation, but performance improves when a measurement of rotation is included. Using ARA* in the motion planner, the solution successfully avoids the obstacles.
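ARA* builds on weighted A*: it repeatedly runs an A* search with an inflated heuristic, lowering the inflation factor eps toward 1 and improving the path as time allows. A single weighted-A* iteration on a 4-connected grid can be sketched as follows; this is a simplified illustration that omits ARA*'s reuse of search effort between iterations, and the grid encoding is an assumption:

```python
import heapq

def weighted_astar(grid, start, goal, eps=2.5):
    """One weighted-A* search on a 4-connected grid (0 = free,
    1 = blocked).  Inflating the Manhattan heuristic by eps >= 1
    finds a (possibly suboptimal) path quickly; ARA* reruns this
    with decreasing eps until eps = 1 or time runs out."""
    h = lambda p: eps * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    parent = {start: None}
    open_heap = [(h(start), start)]
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []          # walk parents back to the start
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g[cur] + 1 < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = g[cur] + 1
                    parent[(nr, nc)] = cur
                    heapq.heappush(open_heap,
                                   (g[(nr, nc)] + h((nr, nc)), (nr, nc)))
    return None  # no path exists
```

With eps > 1 the first solution is found with far fewer expansions than plain A*, at the cost of a bounded suboptimality of eps times the optimal cost, which matches the anytime behaviour described above.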
|