Videos manipulated by machine learning have increased rapidly online in recent years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto those of others in it. This raises concerns about the authenticity of media and demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics specialists assessed the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet the answers showed that LIME may be used to indicate areas at which to start an examination. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
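For context, below is a minimal sketch of how such superpixel-based LIME explanations of a frame-level classifier are typically produced with the `lime` Python package; the placeholder `classifier_fn` detector and the random `frame` array are illustrative assumptions, not the detector or data used in the thesis.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder deepfake detector (assumption, not the thesis model).

    Takes a batch of RGB frames (N, H, W, 3) and returns class
    probabilities with columns [real, fake]. A trivial statistic is used
    purely so the sketch runs end to end.
    """
    p_fake = images.reshape(len(images), -1).mean(axis=1) / 255.0
    return np.stack([1.0 - p_fake, p_fake], axis=1)

# A stand-in video frame; in practice this would be a frame extracted
# from the video under investigation.
frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    frame,
    classifier_fn,
    top_labels=1,      # explain only the predicted class
    hide_color=0,      # perturbed superpixels are blacked out
    num_samples=1000,  # perturbed samples used to fit the local surrogate model
)

# Highlight the superpixels that weigh most toward the predicted class.
label = explanation.top_labels[0]
image, mask = explanation.get_image_and_mask(
    label,
    positive_only=True,  # show only regions supporting the prediction
    num_features=5,      # number of superpixels to highlight
    hide_rest=False,
)
overlay = mark_boundaries(image / 255.0, mask)
```

Because LIME perturbs whole superpixels, its output is a set of highlighted image regions rather than pixel-level evidence, which is why the explanations evaluated in the thesis take the form of regions for examiners to interpret.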
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-184671 |
Date | January 2021 |
Creators | Fjellström, Lisa |
Publisher | Umeå universitet, Institutionen för datavetenskap |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UMNAD ; 1278 |