The development and application of artificial intelligence (AI) for military purposes are increasing rapidly in many parts of the world. Military powers are pursuing programs aimed at the advantages AI can provide. At the same time, ethical questions arise concerning autonomous military systems. This study aims to clarify how future Swedish officers with different backgrounds within the profession relate to the ethical issues that accompany the use of autonomous weapon systems. In the study, respondents are presented with two fictitious scenarios based on the principles of distinction and proportionality, each describing an ethically problematic attack that affects civilians. In each scenario, respondents are asked to take a stance on attacks carried out with different degrees of autonomy. The results show that future officers consider the ethical defensibility of an attack to decrease as the degree of autonomy in the weapon system used increases.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:fhs-11566
Date | January 2023 |
Creators | Axelsson, Marcus |
Publisher | Försvarshögskolan |
Source Sets | DiVA Archive at Uppsala University
Language | Swedish |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |