
Evaluation of Error Metrics Used for Quality Assessment of Simplified Meshes

Level of Detail (LOD) is an important concept in game development, describing the geometric complexity at which a 3D model is rendered. Complex 3D models rendered from a far distance can often be simplified to reduce rendering costs, while the visual appearance of the simplified model should remain as close to the original as possible. This process requires metrics that produce a similarity score or distance value between meshes of different quality.

In this report, four metrics are evaluated on a dataset of models of varying quality: MSDM2, Chamfer Distance, Hausdorff Distance, and Simplygon's internal distance measure, Maximum Deviation. The dataset is annotated with subjective scores from an earlier experiment, and the metrics are evaluated using the Spearman and Pearson correlations between metric values and subjective scores. The metrics are evaluated both on the whole model set and on different categories of models. The correlation scores are computed using three regression techniques: a per-dataset regression, a scaled per-dataset regression, and an averaged per-model regression. In addition, the metrics are evaluated on the same dataset with LODs created by a different simplification algorithm, Simplygon's own reducer.

The results show that MSDM2 correlates best with subjective scores when using a per-dataset regression, while the other metrics all perform quite similarly to one another. The gap between MSDM2 and the other metrics is much larger on categories such as "Hard surface" and "Complex" models. With the less common regression techniques, MSDM2 has the worst correlation and Chamfer Distance the best. Comparing the results from the two datasets, LODs created by Simplygon's own reducer appear to correlate better with the MSDM2 metric; there was no clear difference in scores for the other metrics.
The overall conclusion is that no single metric is always the best: the type of model, the simplification algorithm used to create the LOD, and the evaluation technique can all affect the result.
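For readers unfamiliar with the point-based distances named above, the following sketch illustrates the general idea behind the Hausdorff and Chamfer distances on vertex sets, and how metric values can be correlated against subjective scores with Spearman and Pearson coefficients. This is an illustrative example with made-up numbers, not the thesis's actual implementation; real evaluations typically sample points over mesh surfaces rather than comparing raw vertices.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr, pearsonr

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (n,3) and b (m,3):
    the largest nearest-neighbour distance in either direction."""
    d = cdist(a, b)                      # pairwise Euclidean distances
    return max(d.min(axis=1).max(),      # farthest a-point from set b
               d.min(axis=0).max())      # farthest b-point from set a

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance: mean nearest-neighbour distance,
    summed over both directions."""
    d = cdist(a, b)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy example: a unit square and a copy translated by 0.1 along x.
orig = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
simp = orig + np.array([0.1, 0.0, 0.0])

print(hausdorff(orig, simp))  # 0.1 for a pure translation
print(chamfer(orig, simp))    # 0.2 (0.1 in each direction)

# Correlating metric values against subjective quality scores
# (hypothetical numbers; lower distance should mean higher quality):
metric_values = [0.02, 0.05, 0.11, 0.20]
subjective    = [4.8, 4.1, 3.0, 1.9]
print(spearmanr(metric_values, subjective).correlation)  # -1.0, monotone
print(pearsonr(metric_values, subjective)[0])
```

A strongly negative correlation here is the desired outcome: as the distance metric grows, perceived quality drops, which is exactly the relationship the thesis measures for each metric and regression technique.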

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-186239
Date: January 2022
Creators: Udd, Dennis
Publisher: Linköpings universitet, Institutionen för systemteknik
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
