
DEEP SKETCH-BASED CHARACTER MODELING USING MULTIPLE CONVOLUTIONAL NEURAL NETWORKS

3D character modeling is a crucial part of asset creation in the entertainment industry, particularly for animation and games. A fully automated pipeline via sketch-based 3D modeling (SBM) is an emerging possibility, but development is stalled by unrefined outputs and a lack of character-centered tools. This thesis proposes an improved method for constructing 3D character models with minimal user input, using only two sketch inputs, i.e., an unshaded front view and an unshaded side view. The system implements a deep convolutional neural network (CNN), a deep learning algorithm within the field of artificial intelligence (AI), to process the input sketches and generate multi-view depth, normal, and confidence maps that encode information about the 3D surface. These maps are then fused into a 3D point cloud, a representation of the object in 3D space. The point cloud is converted into a 3D mesh via an occupancy network, which uses another CNN, to obtain a more precise 3D representation; this reconstruction step competes with non-deep-learning approaches such as Poisson surface reconstruction. The proposed system is evaluated for character generation on standardized quantitative metrics (i.e., Chamfer Distance [CD], Earth Mover's Distance [EMD], F-score, and Intersection over Union [IoU]) and compared to the base framework trained on the same database of character sketches and models. This implementation offers a significant improvement in the accuracy of vertex positions for the reconstructed character models.
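The abstract names the evaluation metrics but gives no formulas. The sketch below illustrates, under assumptions not stated in the thesis (point clouds already sampled from the predicted and reference meshes as N x 3 arrays, a squared-distance Chamfer convention, and an arbitrary F-score threshold `tau`), how Chamfer Distance and F-score are commonly computed between two point clouds; it is an illustrative sketch, not the author's implementation.

```python
# Minimal sketch (not the thesis implementation): common formulations of
# Chamfer Distance and F-score between two point clouds, assuming the
# predicted and ground-truth meshes have already been sampled into
# N x 3 NumPy arrays. Conventions (squared vs. unsquared distances,
# threshold tau) vary between papers.
import numpy as np
from scipy.spatial import cKDTree


def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer Distance using squared nearest-neighbour distances."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # each predicted point to nearest GT point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # each GT point to nearest predicted point
    return float(np.mean(d_pred_to_gt ** 2) + np.mean(d_gt_to_pred ** 2))


def f_score(pred: np.ndarray, gt: np.ndarray, tau: float = 0.01) -> float:
    """F-score at distance threshold tau (harmonic mean of precision and recall)."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)
    d_gt_to_pred, _ = cKDTree(pred).query(gt)
    precision = float(np.mean(d_pred_to_gt < tau))  # predicted points close to the GT surface
    recall = float(np.mean(d_gt_to_pred < tau))     # GT points covered by the prediction
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # Toy usage with random point clouds standing in for sampled meshes.
    rng = np.random.default_rng(0)
    pred_pts = rng.uniform(size=(2048, 3))
    gt_pts = rng.uniform(size=(2048, 3))
    print("CD:", chamfer_distance(pred_pts, gt_pts))
    print("F-score:", f_score(pred_pts, gt_pts, tau=0.05))
```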

  1. 10.25394/pgs.21675104.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21675104
Date: 07 December 2022
Creators: Aleena Kyenat Malik Aslam (14216159)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/DEEP_SKETCH-BASED_CHARACTER_MODELING_USING_MULTIPLE_CONVOLUTIONAL_NEURAL_NETWORKS/21675104
