In pop culture, artificial intelligences (AI) are frequently portrayed as worthy of moral personhood, and failing to treat these entities as such is often depicted as analogous to racism. The implicit condition for attributing moral personhood to an AI is usually passing some form of the "Turing Test", wherein an entity passes if it could be mistaken for a human. I argue that this is unfounded under any moral theory that uses the capacity for desire as the criterion for moral standing. Though the action-based theory of desire ensures that passing a sufficiently rigorous version of the Turing Test would be sufficient for moral personhood, that theory has unacceptable results when used in moral theory. If a desire-based moral theory is to be made defensible, it must use a phenomenological account of desire, which would make the Turing Test fail to track the relevant property. / October 2015
Identifier | oai:union.ndltd.org:MANITOBA/oai:mspace.lib.umanitoba.ca:1993/30702 |
Date | 01 September 2015 |
Creators | Novelli, Nicholas |
Contributors | Martens, Rhonda (Philosophy), Shaver, Robert (Philosophy), Hannan, Sarah (Political Studies) |
Source Sets | University of Manitoba Canada |
Detected Language | English |