
Ethics in Social Autonomous Robots: Decision-Making, Transparency, and Trust

Autonomous decision-making machines – ranging from autonomous vehicles to chatbots – are already able to make decisions that have ethical consequences. If these machines are eventually deployed on a large scale, members of society must be able to trust the decisions they make. For these machines to be trustworthy, their decisions must be guided by socially accepted ethical principles; moreover, these principles and their role in machine decision-making must be transparent and explainable: it must be possible to explain why machine decisions are made, and such explanations require that the mechanisms for making them be transparent. Furthermore, manufacturing companies have a corporate social responsibility to design such robots in ways that make them not only safe but also trustworthy. Members of society will not trust a robot that works in mysterious, ambiguous, or inexplicable ways, particularly if that robot is required to make decisions based on ethical principles.
The current literature on embedding ethics in robots is sparse. This thesis aims to partially fill this gap in order to help different stakeholders (including policy makers, the robot industry, robot designers, and the general public) understand the many dimensions of machine-executable ethics. To this end, I provide a framework for understanding the relationships among the different stakeholders who legislate, create, deploy, and use robots, and their reasons for requiring transparency and explanations. This framework aims to provide an account of the relationships between the transparency of the decision-making process in ethical robots, explanations for their behaviour, and the individual and social trust that results.
This thesis also presents a model that decomposes the stages of ethical decision-making into their elementary components, with a view to enabling stakeholders to allocate responsibility for those decisions. In addition, I propose a model for transparency that demonstrates the importance of, and relationships between, disclosure, transparency, and explanation, which are needed for societies to accept and trust robots.
One of the important stakeholders of robotics is the general public. In addition to providing an analytical framework with which to conceptualize ethical decision-making, this thesis therefore analyses opinions drawn from hundreds of written comments posted on public forums concerning the behaviour of socially autonomous robots. This analysis provides insights into laypeople's responses to machines that make decisions and offers support for policy recommendations that regulators should consider in the future.
This thesis contributes to the area of ethics and governance of artificial intelligence.

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/37941
Date: 30 July 2018
Creators: Alaieri, Fahad
Contributors: Vellino, André
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
