1. Semantic Segmentation with Carla Simulator. Malec, Stanislaw. January 2021.
Autonomous vehicles perform semantic segmentation to orient themselves, but training neural networks for semantic segmentation requires large amounts of labeled data. A hand-labeled real-life dataset requires considerable effort to create, so we instead turn to virtual simulators, where the segmentation labels are known, to generate large datasets virtually for free. This work investigates how effective synthetic datasets are in driving scenarios by collecting a dataset from a simulator and testing it against a real-life hand-labeled dataset. We show that mixing synthetic and real-life data gets a model up and running faster than traditional dataset collection methods and achieves close to baseline performance.
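As a rough sketch of the mixing strategy the abstract describes (assuming a PyTorch-style pipeline; the SegmentationFolder class and the dummy tensors below are illustrative placeholders, not the thesis's code):

```python
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class SegmentationFolder(Dataset):
    """Placeholder loader for (image, per-pixel label) pairs; the same class
    could point at a CARLA export or at a hand-labeled real-world folder."""
    def __init__(self, images, labels):
        self.images, self.labels = images, labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# Dummy tensors stand in for real frames: many "free" synthetic samples,
# only a handful of expensive hand-labeled ones.
synthetic = SegmentationFolder(torch.rand(200, 3, 128, 128),
                               torch.zeros(200, 128, 128, dtype=torch.long))
real = SegmentationFolder(torch.rand(20, 3, 128, 128),
                          torch.zeros(20, 128, 128, dtype=torch.long))

# Train on the union of both sources: the synthetic set supplies volume,
# the small real set anchors the model to the target domain.
loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=8, shuffle=True)
```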
2. Simulation Framework for Driving Data Collection and Object Detection Algorithms to Aid Autonomous Vehicle Emulation of Human Driving Styles. January 2020.
Abstract: Autonomous Vehicles (AVs), or self-driving cars, are poised to have an enormous impact on the automotive industry and road transportation. While advances have been made towards the development of safe, competent autonomous vehicles, inadequate attention has been paid to their control in unanticipated situations, such as imminent crashes. Even if autonomous vehicles follow all safety measures, accidents are inevitable, and humans must trust autonomous vehicles to respond appropriately in such scenarios. It is not plausible to program autonomous vehicles with a set of rules to tackle every possible crash scenario. Instead, a possible approach is to align their decision-making capabilities with the moral priorities, values, and social motivations of trustworthy human drivers.

Toward this end, this thesis contributes a simulation framework for collecting, analyzing, and replicating human driving behaviors in a variety of scenarios, including imminent crashes. Four driving scenarios in an urban traffic environment were designed in the CARLA driving simulator platform, in which simulated cars can either drive autonomously or be driven by a user via a steering wheel and pedals. These included three unavoidable crash scenarios, representing classic trolley-problem ethical dilemmas, and a scenario in which a car must be driven through a school zone, in order to examine driver prioritization of reaching a destination versus ensuring safety. Sample human driving data in CARLA was logged from the simulated car's sensors, including the LiDAR, IMU, and camera.

To reproduce human driving behaviors in a simulated vehicle, the AV must be able to identify objects in the environment and evaluate the volume of their bounding boxes for prediction and planning. An object detection method was used that processes LiDAR point cloud data with the PointNet neural network architecture, analyzes RGB images via transfer learning with the Xception convolutional neural network architecture, and fuses the outputs of the two networks. This method was trained and tested on both the KITTI Vision Benchmark Suite dataset and a virtual dataset generated exclusively from CARLA. When applied to the KITTI dataset, the object detection method achieved an average classification accuracy of 96.72% and an average Intersection over Union (IoU) of 0.72, where the IoU metric compares predicted bounding boxes to those used for training.

Masters Thesis, Mechanical Engineering, 2020.
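For reference, the IoU metric cited above is the ratio of the overlap between a predicted bounding box and its ground-truth box to the area of their union. Below is a minimal sketch of the 2D axis-aligned case; the thesis evaluates bounding boxes for 3D object detection, so this is illustrative only, and the function name and box format are assumptions.

```python
def iou_2d(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max). Returns a value in [0, 1]."""
    # Corners of the intersection rectangle
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box compared against a ground-truth box
print(iou_2d((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```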
3. Optical Flow-based Artificial Potential Field Generation for Gradient Tracking Sliding Mode Control for Autonomous Vehicle Navigation. Capito Ruiz, Linda J. 29 July 2019.
No description available.