Technological advancements made in recent decades in the fields of artificial intelligence (AI) and machine learning (ML) have led to further automation of tasks previously performed by humans. Manually reviewing and assessing content uploaded to social media and marketplace platforms is one such task that is both tedious and expensive to perform, and could potentially be automated by ML-based systems. When introducing ML model predictions into a human decision-making process, the interpretability and explainability of models have been shown to be important factors for humans to trust individual sample predictions. This thesis project explores the performance of interpretable ML models used together with humans in an ad review process for a rental marketplace platform. Using the XGBoost framework and SHAP for interpretable ML, a system was built that scores an individual ad and explains the prediction with human-readable sentences based on feature importance. The model reached an ROC AUC score of 0.90 and an Average Precision score of 0.64 on a held-out test set. An end-user survey indicated some trust in the model and an appreciation for the local prediction explanations, but low general impact and helpfulness. While most related work focuses on model performance, this thesis contributes a small model usability study that can provide grounds for utilizing interpretable ML software in any manual decision-making process.
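The abstract's pipeline can be sketched in outline. The snippet below is a hypothetical illustration, not the thesis's actual implementation: it trains an XGBoost classifier on synthetic placeholder data, evaluates it with the same ROC AUC and Average Precision metrics the abstract reports, and uses SHAP's TreeExplainer to turn per-feature attributions for a single ad into human-readable sentences. All feature names, data, thresholds, and sentence templates are invented for illustration.

```python
# Hypothetical sketch of the described approach: an XGBoost ad-review scorer
# whose individual predictions are explained with SHAP feature attributions.
# All data, feature names, and sentence templates here are placeholders.
import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["price_deviation", "description_length", "account_age_days"]
X = rng.random((1000, len(feature_names)))
y = (X[:, 0] + rng.normal(0, 0.3, 1000) > 0.7).astype(int)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Held-out evaluation with the same metrics the thesis reports.
scores = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, scores):.2f}")
print(f"Average Precision: {average_precision_score(y_test, scores):.2f}")

explainer = shap.TreeExplainer(model)

def score_and_explain(ad, top_k=2):
    """Score one ad and render its top SHAP attributions as sentences."""
    proba = model.predict_proba(ad.reshape(1, -1))[0, 1]
    contribs = explainer.shap_values(ad.reshape(1, -1))[0]
    top = np.argsort(np.abs(contribs))[::-1][:top_k]
    sentences = [
        f"'{feature_names[i]}' {'raised' if contribs[i] > 0 else 'lowered'} "
        f"the score (SHAP contribution {contribs[i]:+.3f})"
        for i in top
    ]
    return proba, sentences

score, why = score_and_explain(X_test[0])
print(f"Ad score: {score:.2f}")
print("; ".join(why))
```

TreeExplainer is a natural fit here because it computes exact SHAP values for tree ensembles quickly, which makes per-ad local explanations cheap enough to show reviewers at decision time.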
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-187903 |
Date | January 2022 |
Creators | Dahlgren, Eric |
Publisher | Linköpings universitet, Interaktiva och kognitiva system |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |