Abstract: Autonomous vehicles (AVs), or self-driving cars, are poised to have an enormous impact on the automotive industry and road transportation. While advances have been made toward the development of safe, competent autonomous vehicles, inadequate attention has been paid to their control in unanticipated situations, such as imminent crashes. Even if autonomous vehicles follow all safety measures, accidents are inevitable, and humans must trust autonomous vehicles to respond appropriately in such scenarios. It is not feasible to program autonomous vehicles with a set of rules covering every possible crash scenario. Instead, a possible approach is to align their decision-making capabilities with the moral priorities, values, and social motivations of trustworthy human drivers.

Toward this end, this thesis contributes a simulation framework for collecting, analyzing, and replicating human driving behaviors in a variety of scenarios, including imminent crashes. Four driving scenarios in an urban traffic environment were designed in the CARLA driving simulator platform, in which simulated cars can either drive autonomously or be driven by a user via a steering wheel and pedals. These comprised three unavoidable crash scenarios, representing classic trolley-problem ethical dilemmas, and a scenario in which a car must be driven through a school zone, in order to examine how drivers prioritize reaching a destination versus ensuring safety. Sample human driving data was logged from the simulated car's LiDAR, IMU, and camera sensors.

To reproduce human driving behaviors in a simulated vehicle, the AV must be able to identify objects in the environment and estimate the volumes of their bounding boxes for prediction and planning. An object detection method was used that processes LiDAR point cloud data with the PointNet neural network architecture, analyzes RGB images via transfer learning with the Xception convolutional neural network architecture, and fuses the outputs of the two networks. This method was trained and tested on both the KITTI Vision Benchmark Suite and a virtual dataset generated exclusively from CARLA. On the KITTI dataset, the method achieved an average classification accuracy of 96.72% and an average Intersection over Union (IoU) of 0.72, where the IoU metric compares predicted bounding boxes to the ground-truth boxes used for training.
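To make the data-logging step concrete, the following is a minimal sketch of attaching and recording LiDAR, camera, and IMU sensors with CARLA's Python API (a 0.9.x release is assumed; the mounting transforms, output paths, and choice of vehicle blueprint are illustrative, not the thesis's actual configuration):

    import carla

    # Connect to a running CARLA server (default host/port) and spawn an ego vehicle.
    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    bp_lib = world.get_blueprint_library()

    vehicle = world.spawn_actor(
        bp_lib.filter('vehicle.*')[0],
        world.get_map().get_spawn_points()[0])

    # Attach LiDAR, RGB camera, and IMU sensors to the vehicle
    # (mounting positions here are illustrative).
    lidar = world.spawn_actor(
        bp_lib.find('sensor.lidar.ray_cast'),
        carla.Transform(carla.Location(z=2.4)), attach_to=vehicle)
    camera = world.spawn_actor(
        bp_lib.find('sensor.camera.rgb'),
        carla.Transform(carla.Location(x=1.5, z=2.0)), attach_to=vehicle)
    imu = world.spawn_actor(
        bp_lib.find('sensor.other.imu'),
        carla.Transform(), attach_to=vehicle)

    # Log each measurement: point clouds as .ply files, images as .png,
    # IMU readings printed per frame. Callbacks run until the sensors
    # are stopped or destroyed.
    lidar.listen(lambda pc: pc.save_to_disk('out/lidar/%06d.ply' % pc.frame))
    camera.listen(lambda im: im.save_to_disk('out/rgb/%06d.png' % im.frame))
    imu.listen(lambda m: print(m.frame, m.accelerometer, m.gyroscope))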
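The described two-branch detector can be outlined as follows. This sketch pairs a frozen, ImageNet-pretrained Xception feature extractor with a simplified PointNet-style branch (shared 1x1 convolutions followed by a symmetric max-pool, omitting PointNet's input and feature transform networks), and concatenates the two feature vectors before the output heads. The layer sizes, point count, and head outputs are assumptions for illustration, not the thesis's exact architecture:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_fusion_model(num_points=2048, num_classes=4):
        # Image branch: Xception pretrained on ImageNet, used as a frozen
        # feature extractor (transfer learning). Inputs are assumed to be
        # preprocessed with tf.keras.applications.xception.preprocess_input.
        image_in = layers.Input(shape=(299, 299, 3))
        xception = tf.keras.applications.Xception(
            weights='imagenet', include_top=False, pooling='avg')
        xception.trainable = False
        img_feat = xception(image_in)                    # (None, 2048)

        # Point-cloud branch: PointNet-style shared MLPs (1x1 convolutions
        # over points) followed by an order-invariant global max-pool.
        pts_in = layers.Input(shape=(num_points, 3))
        x = layers.Conv1D(64, 1, activation='relu')(pts_in)
        x = layers.Conv1D(128, 1, activation='relu')(x)
        x = layers.Conv1D(1024, 1, activation='relu')(x)
        pts_feat = layers.GlobalMaxPooling1D()(x)        # (None, 1024)

        # Late fusion: concatenate the image and point-cloud features.
        fused = layers.concatenate([img_feat, pts_feat])
        fused = layers.Dense(512, activation='relu')(fused)

        # Output heads: object class, and box center + extent (6 values).
        cls_out = layers.Dense(num_classes, activation='softmax', name='cls')(fused)
        box_out = layers.Dense(6, name='box')(fused)
        return Model(inputs=[image_in, pts_in], outputs=[cls_out, box_out])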
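The reported IoU metric is the ratio of the overlap volume of two boxes to the volume of their union. A minimal implementation for axis-aligned 3D boxes (KITTI ground-truth boxes are actually yaw-rotated, so this is a simplification for illustration):

    def iou_3d(box_a, box_b):
        """IoU of two axis-aligned 3D boxes, each (xmin, ymin, zmin, xmax, ymax, zmax)."""
        # Overlap volume: product of per-axis overlap lengths (zero if disjoint).
        inter = 1.0
        for i in range(3):
            lo = max(box_a[i], box_b[i])
            hi = min(box_a[i + 3], box_b[i + 3])
            inter *= max(0.0, hi - lo)

        def volume(b):
            return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

        union = volume(box_a) + volume(box_b) - inter
        return inter / union if union > 0 else 0.0

By this measure, the reported average IoU of 0.72 means that, on average, the predicted and ground-truth boxes overlap in roughly 72% of their combined volume.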
Identifier | oai:union.ndltd.org:asu.edu/item:63033 |
Date | January 2020 |
Contributors | Govada, Yashaswy (Author), Berman, Spring (Advisor), Johnson, Kathryn (Committee member), Marvi, Hamidreza (Committee member), Arizona State University (Publisher) |
Source Sets | Arizona State University |
Language | English |
Detected Language | English |
Type | Masters Thesis |
Format | 80 pages |
Rights | http://rightsstatements.org/vocab/InC/1.0/ |