
Artificial Intelligence in Lethal Automated Weapon Systems - What's the Problem? : Analysing the framing of LAWS in the EU ethics guidelines for trustworthy AI, the European Parliament Resolution on autonomous weapon systems and the CCW GGE guiding principles.

Beltran, Nicole January 2020 (has links)
Lethal automated weapon systems (LAWS) are developed and deployed by a growing number of state and non-state actors, although no legally binding international framework exists as of yet. As a first attempt to regulate LAWS, the UN appointed a group of governmental experts (GGE) to create the guiding principles on the issue of LAWS. A few years later the EU appointed an expert group to create the Ethics guidelines for trustworthy AI, and the European Parliament passed a resolution on the issue of LAWS. This thesis attempts to make visible the underlying norms and discourses that have shaped these guiding principles and guidelines. By scrutinising the documents through the 'What's the problem represented to be?'-approach, the discursive practices that enable the framing are illuminated. The obscured problems not spoken of in the EU and UN documents are emphasised, suggesting that both documents oversimplify and downplay the dangers of LAWS, leaving issues such as gender repercussions, human dignity and the dangers of the sophisticated weapon systems themselves largely unproblematised and hidden behind their suggested dichotomised and anthropocentric solutions, which largely amount to a simple "add human and stir" kind of solution. The underlying cause of this tendency seems to stem from a general unwillingness of states to regulate, as LAWS are quickly becoming a matter of haves and have-nots and may potentially change warfare as we know it. A case can also be made that AI's 'Hollywood problem' influences the framing of LAWS, where the dystopian, Terminator-like depiction in popular culture can be seen reflected in international policy papers and statements.
