This thesis introduces a 3D body tracking system based on neural networks and 3D geometry, which can robustly estimate body poses and accurate body joints. The system takes RGB-D data as input. Body poses and joints are first extracted from the color image using a deep learning approach. The estimated joints and skeletons are then lifted to 3D space using camera calibration information. The system runs at 3 to 4 frames per second. It can be applied to any RGB-D sensor, such as Kinect, Intel RealSense [14], or any customized system with calibrated color and depth cameras. Compared to state-of-the-art 3D body tracking systems, this system is more robust and produces much more accurate joint locations, which benefits applications that require precise joints, such as virtual try-on, body measurement, and real-time avatar driving.
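The abstract describes lifting 2D joint detections to 3D using the depth map and camera calibration. Below is a minimal sketch of that step, assuming a standard pinhole camera model with intrinsics fx, fy, cx, cy and a depth image already aligned to the color image; the function name and parameters are illustrative, not taken from the thesis.

```python
# Sketch: back-project 2D joint detections to 3D camera coordinates.
# Assumes a pinhole model and a depth map registered to the color image;
# all names here are hypothetical, not the thesis's actual implementation.
import numpy as np

def backproject_joints(joints_2d, depth_map, fx, fy, cx, cy, depth_scale=0.001):
    """Convert 2D joint pixels (u, v) plus depth to 3D points (X, Y, Z) in meters.

    joints_2d:   (N, 2) array of pixel coordinates in the color image.
    depth_map:   (H, W) array of raw depth values aligned to the color image.
    fx, fy:      focal lengths in pixels; cx, cy: principal point.
    depth_scale: factor converting raw depth units to meters (e.g. 0.001 for mm).
    """
    joints_3d = np.zeros((len(joints_2d), 3))
    for i, (u, v) in enumerate(joints_2d):
        z = depth_map[int(round(v)), int(round(u))] * depth_scale
        # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
        joints_3d[i] = [(u - cx) * z / fx, (v - cy) * z / fy, z]
    return joints_3d
```

In practice, a robust system would also handle invalid (zero) depth readings, for example by sampling a small neighborhood around each joint pixel, since raw depth maps often contain holes at object boundaries.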
Identifier | oai:union.ndltd.org:uky.edu/oai:uknowledge.uky.edu:cs_etds-1064
Date | 01 January 2017
Creators | Xu, Qingguo |
Publisher | UKnowledge |
Source Sets | University of Kentucky |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations--Computer Science |