1 |
Visual Detection And Tracking Of Moving Objects
Ergezer, Hamza 01 November 2007 (has links) (PDF)
In this study, the primary steps of a visual surveillance system are presented: moving
object detection and tracking of the detected objects. Background subtraction has
been performed to detect the moving objects in video taken from a static camera.
Four methods have been used in the background subtraction process: frame
differencing, running (moving) average, eigenbackground subtraction, and mixture
of Gaussians. After background subtraction, the objects to be tracked have been
acquired using additional operations such as morphological operations and
connected component analysis. While tracking the moving objects, active contour
models (snakes) have been used as one of the approaches. In addition to this
method, a Kalman tracker and a mean-shift tracker have also been utilized. A new
approach has been proposed for the problem of tracking multiple targets. We have
implemented this method for single and multiple camera configurations. Multiple
cameras have been used to augment the measurements. A homography matrix has been
calculated to find the correspondence between cameras. Then, measurements and
tracks have been associated by the new tracking method.
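Of the four background-subtraction methods listed above, the running (moving) average variant is the simplest to sketch. The learning rate, threshold, and synthetic frames below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def running_average_subtract(frames, alpha=0.05, thresh=30):
    """Running-average background subtraction (one of the four methods
    named in the abstract). alpha and thresh are illustrative choices."""
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        diff = np.abs(f.astype(np.float64) - bg)
        masks.append(diff > thresh)        # foreground where change is large
        bg = (1 - alpha) * bg + alpha * f  # slowly absorb scene changes into the model
    return masks

# Synthetic demo: a static 8x8 scene, then a bright 2x2 "object" appears.
static = np.zeros((8, 8), dtype=np.uint8)
moving = static.copy()
moving[2:4, 2:4] = 255
masks = running_average_subtract([static, static, moving])
print(masks[1].sum())  # the 2x2 object yields 4 foreground pixels
```

In practice the resulting masks would then be cleaned with the morphological operations and connected component analysis the abstract mentions, for which OpenCV provides standard implementations.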
|
2 |
Multiple Target Tracking Using Multiple Cameras
Yilmaz, Mehmet 01 May 2008 (has links) (PDF)
Video surveillance has long been in use to monitor security-sensitive areas such as banks, department stores, crowded public places and borders. The rise in computer speed, the availability of cheap large-capacity storage devices, and high-speed network infrastructure have paved the way for cheaper, multi-sensor video surveillance systems. In this thesis, the problem of tracking multiple targets with multiple cameras has been discussed. Cameras have been located so that they have overlapping fields of view. A dynamic background-modeling algorithm is described for segmenting moving objects from the background, capable of adapting to dynamic scene changes and periodic motion, such as illumination changes and the swaying of trees. After segmentation of the foreground scene, the objects to be tracked have been acquired by morphological operations and connected component analysis. For the purpose of tracking the moving objects, an active contour model (snakes) has been used as one approach, in addition to a Kalman tracker. As the main tracking algorithm, a rule-based tracker has been developed first for a single camera, and then extended to multiple cameras. Results of the existing and proposed methods are given in detail.
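The Kalman tracker mentioned in both abstracts can be sketched as a minimal constant-velocity filter for a single image coordinate. The transition and measurement matrices follow the standard constant-velocity model; the noise levels and measurement sequence are illustrative assumptions, not values from either thesis:

```python
import numpy as np

# Constant-velocity Kalman filter tracking one image coordinate.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition: (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-3                    # process noise (assumed small)
R = np.array([[1.0]])                   # measurement noise (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2]:          # noisy positions of a target moving at ~1 px/frame
    # predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step with the new measurement z
    y = np.array([[z]]) - H @ x                 # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]), float(x[1, 0]))  # smoothed position and recovered velocity
```

A full tracker would run one such filter per target and associate detections to tracks each frame, which is where the rule-based association logic described above would come in.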
|
3 |
Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM
Troutman, Blake 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Simultaneous localization and mapping (SLAM) is a general device localization technique that uses real-time sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from utilizing a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially-moving objects. The limitation with these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially-moving objects may be static in the scene, which these approaches often do not account for.
Some other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM with no required prior information about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. This dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, which is built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, which address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though it does not act as a practical solution for all use cases, has an ability to accurately register and localize unknown moving objects in such a way that makes it useful for some applications of AR without thwarting the system's ability to also perform accurate camera localization.
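As a toy illustration of the dynamic SLAM idea discussed above (and explicitly not LUMO-SLAM's actual registration criterion), a monocular system can flag map points whose reprojection error under the estimated camera pose is inconsistent with a static scene. The intrinsics, points, and threshold below are invented for illustration:

```python
import numpy as np

def flag_moving(points_3d, observed_2d, K, thresh=5.0):
    """Flag points whose reprojection error exceeds thresh pixels.
    points_3d: Nx3 points in the camera frame; observed_2d: Nx2 detections.
    A persistently large residual suggests the point is not static."""
    proj = (K @ points_3d.T).T          # project into homogeneous image coords
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide
    err = np.linalg.norm(proj - observed_2d, axis=1)
    return err > thresh                 # True => candidate moving point

# Assumed pinhole intrinsics and two points 2 m in front of the camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0],
                [0.5, 0.0, 2.0]])
obs = np.array([[320.0, 240.0],    # matches the static prediction exactly
                [600.0, 240.0]])   # drifted far from its predicted pixel
print(flag_moving(pts, obs, K))    # second point is flagged as moving
```

A real system would of course accumulate such evidence over multiple keyframes and robustly estimate the camera pose first, since a single bad residual can also come from a matching error.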
|
4 |
Traffic Scene Perception using Multiple Sensors for Vehicular Safety Purposes
Hosseinyalamdary, Saivash 04 November 2016 (has links)
No description available.
|
5 |
Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM
Blake Austin Troutman (15305962) 18 May 2023 (has links)
<p>Simultaneous localization and mapping (SLAM) is a general device localization technique that uses real-time sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from utilizing a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially-moving objects. The limitation with these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially-moving objects may be static in the scene, which these approaches often do not account for.
Some other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM with no required prior information about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. This dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, which is built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, which address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though it does not act as a practical solution for all use cases, has an ability to accurately register and localize unknown moving objects in such a way that makes it useful for some applications of AR without thwarting the system's ability to also perform accurate camera localization.</p>
|