The primary goal of this work is to answer the following question: if Artificial Intelligences (AI) are proper subjects of moral consideration, should we develop such AI, that is, AI worthy of moral consideration in its own right? Answering this question requires a systematic overview of whether AI are, or could be, subjects of moral consideration. By combining P. Wang's definition of AI with A.K. Andersson's "Relevant Similarity Theory", I aim to identify conditions under which an AI could be demarcated as a proper subject of moral consideration. As a comparison, I also combine Wang's definition with M.C. Nussbaum's "Capability Theory". The two theories share two strengths: each is a good contemporary example of an influential family of views in ethics, and together they represent a fairly wide spectrum of ethical theory. Using the insights gained, I first develop an argument showing that beings classifiable as AI under Wang's definition of intelligence would be correctly demarcated as proper subjects of moral consideration, regardless of which of the two moral theories one prefers. I then develop an argument answering the primary question: if AI are proper subjects of moral consideration, then we should not develop AI further.

HT 2021 (autumn term 2021)
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-466098 |
Date | January 2022 |
Creators | Johansson, Einar |
Publisher | Uppsala universitet, Avdelningen för praktisk filosofi |
Source Sets | DiVA Archive at Uppsala University |
Language | Swedish |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |