This study’s main contribution is a theoretical model for analysing parties’ use of deepfakes of their candidates in elections. Research stresses deepfakes’ capacity for disinformation as a challenge for democracies. Yet in the 2022 Korean election, the two main parties used deepfakes of their candidates to communicate with voters. Given deepfakes’ well-studied negative implications, this seems perplexing. Against this background, the study asks: how can parties ethically use deepfakes of their candidates in elections? Merging AI ethics and deliberative democracy theory, three prerequisites for adherence to AI ethical principles and deliberative norms are identified: disclosure of information, civil language, and giving justification. The model was applied in a content analysis of deepfake use in the 2022 Korean election. Results indicate strong adherence to the prerequisite of civil language, and partial adherence to disclosure of information and giving justification, as well as to their corresponding AI ethical principles and deliberative norms. The findings suggest that AI ethics and deliberative democracy theory are useful for studying the implications of parties’ deepfake use. Starting from the premise that deepfakes are morally neutral, this study addresses a gap in the emerging field of deepfake research and highlights areas needing further inquiry. If deepfakes become a legitimate communication tool for parties, questions arise about the implications of such normalisation.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-227478 |
Date | January 2024 |
Creators | Halvarsson, Mikaela |
Publisher | Umeå universitet, Statsvetenskapliga institutionen |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |