About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

e-DTS 2.0: A Next-Generation of a Distributed Tracking System

Rybarczyk, Ryan Thomas 20 March 2012 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A key component of tracking is identifying relevant data and combining it to produce an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, and provides an empirical analysis of these enhancements, along with suggestions for future improvements. When a camera identifies an object within its field of view, it communicates with a JINI-based service to expose this information to any client that wishes to consume it. This communication uses the JINI Multicast Lookup Protocol, which enables dynamic discovery of sensors as they are added to or removed from the environment during tracking. The client can then retrieve the information from the service and apply a fusion technique to estimate the marker's current location with respect to a given coordinate system. The coordinate-system handoff and transformation is a key component of the e-DTS 2.0 tracking process, as it improves the agility of the system.
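The coordinate-system handoff described in the abstract amounts to re-expressing a marker position observed in one camera's frame in another camera's frame. A minimal sketch with NumPy, assuming a hypothetical pair of camera frames related by a known rotation and translation (the names and values are illustrative, not taken from the thesis):

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical pose of camera B expressed in camera A's frame:
# B is rotated 90 degrees about z and sits 2 m along A's x-axis.
R_ab = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
t_ab = np.array([2.0, 0.0, 0.0])
T_ab = make_transform(R_ab, t_ab)

# A marker seen at (1, 0, 3) in camera B's frame, in homogeneous coordinates...
marker_b = np.array([1.0, 0.0, 3.0, 1.0])

# ...is handed off to camera A's coordinate system with one matrix product.
marker_a = T_ab @ marker_b   # -> (2, 1, 3) in camera A's frame
```

Chaining such transforms is what lets the system hand a marker off between cameras as it moves through the environment.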
2

Model-Based Coding: Initialization, Parameter Extraction and Evaluation

Yao, Zhengrong January 2005 (has links)
This thesis covers topics relevant to model-based coding, a promising very-low-bit-rate video coding technique. The idea behind the technique is to parameterize a talking head and to extract and transmit the parameters describing facial movements; at the receiver, the parameters are used to reconstruct the talking head. Since only high-level animation parameters are transmitted, very high compression can be achieved with this coding scheme. The thesis covers three key problems. First, although it is fundamental, the initialization problem has been neglected to some extent in the literature. We pay particular attention to this problem and propose a pseudo-automatic initialization scheme: an analysis-by-synthesis scheme based on simulated annealing, which proved to be efficient. Second, owing to recent technical advances and the newly emerged MPEG-4 standard, new schemes for texture mapping and motion estimation are suggested that use sample-based direct texture mapping; the feasibility of active motion estimation is explored, and it is shown to give more than ten times the tracking resolution. Building on mature face-detection techniques, dynamic programming is introduced into the face-detection module to support face tracking. Third, the thesis addresses how to evaluate face-tracking techniques. We study the evaluation problem by examining the commonly used method, which employs a physical magnetic sensor to provide "ground truth", and point out that this method can be quite misleading.
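The pseudo-automatic initialization idea, searching model parameters by analysis-by-synthesis with simulated annealing, can be sketched generically. The cost function below is a toy stand-in for the real synthesis error (a quadratic with a cosine term added to create local minima); all names and values are illustrative, not from the thesis:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimize `cost` by simulated annealing: accept worse moves with
    probability exp(-delta/T), so early iterations can escape local minima."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # propose a random nearby parameter
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc                  # accept (always downhill, sometimes uphill)
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                          # cool the temperature
    return best, fbest

# Toy analysis-by-synthesis cost: squared error against a hypothetical
# observed value of 3.0, with a cosine term creating local minima.
cost = lambda p: (p - 3.0) ** 2 + 0.5 * math.cos(4 * p)
p_hat, err = simulated_annealing(cost, x0=0.0)
```

In the real setting the cost would be an image-synthesis error between the rendered head model and the observed frame, and the parameter vector would be multidimensional; the acceptance and cooling logic stays the same.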
4

Three-Dimensional Hand Tracking and Surface-Geometry Measurement for a Robot-Vision System

Liu, Chris Yu-Liang 17 January 2009 (has links)
Tracking of human motion, and object identification and recognition, are important in many applications, including motion capture for human-machine interaction systems. This research is part of a larger project to enable a service robot to recognize new objects and perform different object-related tasks based on task guidance and demonstration provided by a general user. The work consists of the calibration and testing of two vision systems that are part of a robot-vision system. First, real-time tracking of a human hand is achieved using images acquired from three calibrated, synchronized cameras. Hand pose is determined from the positions of physical markers and input to the robot system in real time. Second, a multi-line laser-camera range sensor is designed, calibrated, and mounted on a robot end-effector to provide three-dimensional (3D) geometry information about objects in the robot environment; the sensor includes two cameras to provide stereo vision. For the 3D hand tracking, a novel score-based hand tracking scheme is presented, employing dynamic multi-threshold marker detection, a stereo camera-pair utilization scheme, and marker matching and labeling based on epipolar geometry and hand-pose axis analysis, to enable real-time hand tracking under occlusion and non-uniform lighting. For surface-geometry measurement with the multi-line laser range sensor, two approaches to two-dimensional (2D) to 3D coordinate mapping are analyzed, using Bezier surface fitting and neural networks, respectively. The neural-network approach was found to be the more viable one, worth exploring further for its lower 3D reconstruction error and its consistency across different regions of the object space.
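Marker matching across a calibrated camera pair rests on the epipolar constraint x2ᵀ F x1 = 0: a candidate pair of detections can be scored by how far this residual is from zero. A minimal NumPy sketch, assuming a hypothetical stereo rig with identity intrinsics (so the fundamental matrix F coincides with the essential matrix); all poses and points are illustrative, not from the thesis:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical stereo pair: camera 2 translated 1 unit along x
# relative to camera 1, with no rotation (R = I, t = [1, 0, 0]).
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R   # essential matrix; F == E for identity intrinsics

def epipolar_residual(x1, x2, F):
    """|x2^T F x1| for homogeneous image points; near zero for a correct match."""
    return abs(x2 @ F @ x1)

# A 3D point (0.5, 0.2, 2) projects (pinhole, unit focal length) to:
x1 = np.array([0.25, 0.1, 1.0])   # in camera 1
x2 = np.array([0.75, 0.1, 1.0])   # in camera 2, after the [1, 0, 0] shift

r_good = epipolar_residual(x1, x2, E)                           # correct match
r_bad = epipolar_residual(x1, np.array([0.75, 0.3, 1.0]), E)    # mismatched marker
```

Scoring every cross-camera pairing this way, and keeping low-residual pairs, is the standard route to marker matching and labeling under occlusion, since a marker hidden in one view simply fails to produce a low-residual pair.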
6

Nonlinear Estimation for Vision-Based Air-to-Air Tracking

Oh, Seung-Min 14 November 2007 (has links)
Unmanned aerial vehicles (UAVs) have been the focus of significant research interest in both military and commercial areas, since they have a variety of practical applications, including reconnaissance, surveillance, target acquisition, search and rescue, patrolling, real-time monitoring, and mapping. To increase the autonomy and capability of these UAVs, and thus reduce the workload of human operators, typical autonomous UAVs are equipped with both a navigation system and a tracking system. The navigation system provides high-rate ownship states (typically ownship inertial position, inertial velocity, and attitude) that are used directly in the autopilot system, and the tracking system provides low-rate target tracking states (typically target position and velocity relative to the ownship). Target states in the global frame are obtained by adding the ownship states and the target tracking states. The estimates produced by this combination of navigation and tracking systems provide key information for the design of most UAV guidance laws, control command generation, trajectory generation, and path planning. As a baseline system for estimating ownship states, an integrated navigation system is designed using an extended Kalman filter (EKF) with sequential measurement updates. To effectively fuse the various sources of aiding-sensor information, the sequential measurement update algorithm is introduced in the design of the integrated navigation system, with the objective of implementation in low-cost autonomous UAVs. Since state-estimation accuracy using a low-cost, MEMS-based IMU degrades with time, several absolute sensors (low update rate, but error bounded in time), including the GPS receiver, the magnetometer, and the altimeter, compensate for these time-degrading errors.
In this work, the sequential measurement update algorithm, operating on smaller vectors and matrices, provides a convenient framework for fusing the many sources of information in the design of integrated navigation systems. Within this framework, aiding-sensor measurements with different sizes and update rates are easily fused with the basic high-rate IMU processing. As a new mechanism for estimating ownship states, a new nonlinear filtering framework, the unscented Kalman filter (UKF) with sequential measurement updates, is developed and applied to the design of a new integrated navigation system. The UKF is known to be more accurate and more convenient to use, at a slightly higher computational cost: it provides at least second-order accuracy by approximating the Gaussian distribution rather than the arbitrary nonlinear function, compared to the first-order accuracy of the well-known linearization-based EKF. In addition, the often troublesome step of computing Jacobian matrices, always required when designing an integrated navigation system with the EKF, is eliminated. Furthermore, by employing sequential measurement updates in the UKF, we retain the advantages of the sequential update strategy, such as easy compensation of sensor latency, easy fusion of multiple sensors, and easy addition and removal of sensors, while keeping those of the standard UKF, such as accurate estimation and the elimination of Jacobian matrices. Simulation results show better performance for the UKF-based navigation system than for the EKF-based system: the UKF-based system is more robust to initial accelerometer and rate-gyro biases and more accurate in reducing transient peaks and steady-state errors in ownship state estimation. To estimate the target tracking states, or target kinematics, a new vision-based tracking system is designed using a UKF in a three-dimensional air-to-air tracking scenario.
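The heart of the UKF is the unscented transform: instead of linearizing the nonlinearity, it propagates 2n+1 deterministically chosen sigma points through it. A minimal NumPy sketch of a generic textbook formulation with a single κ scaling parameter (not the thesis's exact filter); for a linear map the transform is exact, which gives a convenient sanity check:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear f using the
    unscented transform: 2n+1 deterministically chosen sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])        # push each sigma point through f
    y_mean = w @ ys                             # weighted output mean
    diffs = ys - y_mean
    y_cov = (w[:, None] * diffs).T @ diffs      # weighted output covariance
    return y_mean, y_cov

# Sanity check with a linear map: the transform is exact, so it must agree
# with the standard y = A x, Cov_y = A P A^T propagation.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
mean = np.array([1.0, 2.0])
cov = np.array([[0.2, 0.05], [0.05, 0.1]])
y_mean, y_cov = unscented_transform(lambda x: A @ x, mean, cov)
```

A full UKF wraps this transform around the process and measurement models in turn; no Jacobians appear anywhere, which is the convenience the abstract emphasizes.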
The tracking system estimates not only the target tracking states but also several target characteristics, including target size and acceleration. By introducing the UKF, the new vision-based tracking system achieves good estimation performance, overcoming the highly nonlinear characteristics of the problem with a relatively simple formulation; moreover, the computation of messy Jacobian matrices for the target acceleration dynamics and the angular measurements is removed. A new particle filtering framework, the extended marginalized particle filter (EMPF), is then developed and applied to the design of a new vision-based tracking system. In this work, only the three position components with vision measurements are handled in the particle filtering part, by applying the Rao-Blackwellization (marginalization) approach; the remaining dynamics with Gaussian noise, including the target's nonlinear acceleration model, are handled effectively by the UKF. Since vision information can be better represented by probabilistic measurements, and the EMPF framework is easily extended to handle this type of measurement, better performance in estimating the target tracking states can be achieved by directly incorporating non-Gaussian, probabilistic vision information as measurement inputs to the vision-based tracking system in the EMPF framework.
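The particle-filtering side can be illustrated with a plain bootstrap particle filter; the EMPF adds Rao-Blackwellization on top of this predict/weight/resample loop, handling the Gaussian substate with a Kalman-style update instead of particles. A minimal 1D sketch (the model and all values are illustrative, not the thesis's tracking model):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(measurements, n_particles=1000, q=0.1, r=0.5):
    """Minimal bootstrap particle filter for a 1D random-walk state
    observed in Gaussian noise: predict, weight by likelihood, resample."""
    particles = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, q, n_particles)      # predict (process noise)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)     # measurement likelihood
        w /= w.sum()
        estimates.append(w @ particles)                   # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)   # resample by weight
        particles = particles[idx]
    return estimates

# Track a hypothetical constant position of 2.0 from noisy measurements.
true_pos = 2.0
zs = true_pos + rng.normal(0.0, 0.5, 50)
est = bootstrap_pf(zs)
```

Because the weighting step accepts any likelihood function, non-Gaussian probabilistic vision measurements drop in directly, which is exactly the flexibility the abstract claims for the EMPF framework.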
