
A computational model of moral learning for autonomous vehicles

Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018 / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 75-81). / We face a future of delegating many important decision-making tasks to artificial intelligence (AI) systems as we anticipate widespread adoption of autonomous systems such as autonomous vehicles (AVs). However, a recent string of fatal accidents involving AVs reminds us that delegating certain decision-making tasks has deep ethical implications. As a result, building an ethical AI agent that makes decisions in line with human moral values has surfaced as a key challenge for AI researchers. While recent advances in deep learning across many domains of human intelligence suggest that deep learning models will also pave the way for moral learning and ethical decision making, training a deep learning model usually requires large quantities of human-labeled training data. In contrast, research in the cognitive science of human moral learning theorizes that the human mind can learn moral values from a few, limited observations of other individuals' moral judgments and apply those values to make ethical decisions in new and unique moral dilemmas. How can we leverage these insights about human moral learning to design AI agents that can rapidly infer the moral values of the humans they interact with? In this work, I explore three cognitive mechanisms - abstraction, society-individual dynamics, and response time analysis - and demonstrate how each contributes to rapid inference of moral values from a limited number of observations.
I propose two Bayesian cognitive models that express these mechanisms within a hierarchical Bayesian modeling framework, and I use large-scale ethical judgments from Moral Machine to empirically demonstrate the contributions of these mechanisms to rapid inference of individual preferences and biases in ethical decision making. / by Richard Kim. / S.M. / S.M. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences
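The core idea of the abstract - that an informative population-level prior lets an agent infer an individual's moral preferences from only a few observed judgments - can be illustrated with a minimal, hypothetical sketch. This is not the thesis's actual model; the Beta-Binomial setup, the preference parameter, and the prior counts below are all illustrative assumptions:

```python
def infer_preference(judgments, pop_alpha=6.0, pop_beta=4.0):
    """Infer an individual's preference weight from a few binary judgments.

    judgments: iterable of 0/1 moral judgments (e.g., 1 = chose to spare
               the pedestrian in a dilemma). Hypothetical encoding.
    pop_alpha, pop_beta: illustrative population-level Beta prior counts,
               standing in for what a hierarchical model would learn from
               society-wide data (e.g., Moral Machine responses).

    Returns the posterior mean and variance of the preference weight,
    using the conjugate Beta-Binomial update:
        posterior = Beta(pop_alpha + k, pop_beta + n - k)
    """
    k = sum(judgments)          # number of "spare" choices observed
    n = len(judgments)          # total judgments observed
    a = pop_alpha + k
    b = pop_beta + n - k
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var


# With only five observed judgments, the informative population prior
# still dominates, yielding a stable individual-level estimate.
mean, var = infer_preference([1, 1, 1, 0, 1])
```

Because the population prior carries most of the information, the posterior over the individual's preference is already sharp after a handful of observations - the same "rapid inference from limited data" dynamic the abstract attributes to the society-individual mechanism.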

Identifier oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/122897
Date January 2018
Creators Kim, Richard
Contributors Iyad Rahwan, Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher Massachusetts Institute of Technology
Source Sets M.I.T. Theses and Dissertation
Language English
Detected Language English
Type Thesis
Format 81 pages, application/pdf
Rights MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission., http://dspace.mit.edu/handle/1721.1/7582
