
Creating a self-driving terrain vehicle in a simulated environment

Outside of city environments, there are many unstructured, rough environments that pose challenges for vehicle navigation. In these environments, vibrations caused by rough terrain can be harmful to human operators, and a human operator cannot work around the clock. A promising solution is to use artificial intelligence in place of human operators. I test this by applying the artificial intelligence technique known as reinforcement learning, using the Proximal Policy Optimization algorithm, to basic locomotion tasks in a simulated environment with a simple terrain vehicle. The vehicle consists of two chassis, each with two wheels, connected by an articulation joint that rotates to steer the vehicle. I show that a trained model can learn to operate the terrain vehicle and complete basic tasks, such as finding and following a path while avoiding obstacles. I tested robustness by evaluating a model trained on flat ground on sloped terrain. The results from tests with different slopes show that, in most environments, the trained model could handle slopes of up to around 7.5-10 degrees without much issue, even though it had no way of detecting the slope. This indicates that the models can perform their tasks well even when disturbances are introduced, as long as those disturbances do not require them to significantly change their behavior.
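To make the setup concrete, the sketch below shows how a Proximal Policy Optimization agent could be trained on an articulated-vehicle task of this kind. It is not the thesis code or simulator: the toy environment, its kinematics, reward, and all names (ToyArticulatedVehicleEnv, the action and observation layout) are illustrative assumptions, and the PPO implementation is assumed to come from the stable-baselines3 library.

```python
# A minimal sketch (not the thesis code): a toy articulated-vehicle environment
# and PPO training via stable-baselines3. Dynamics, reward, and names are
# illustrative assumptions, not the simulator used in the thesis.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyArticulatedVehicleEnv(gym.Env):
    """Toy 2-D kinematic stand-in for a two-chassis articulated vehicle.

    Action: [forward speed command, articulation-angle rate command].
    Observation: x, y, heading, articulation angle, and the goal position,
    all on simple flat ground.
    """

    def __init__(self, dt: float = 0.1):
        super().__init__()
        self.dt = dt
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)

    def _obs(self):
        return np.array([*self.pose, self.joint, *self.goal], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pose = np.zeros(3)          # x, y, heading
        self.joint = 0.0                 # articulation angle
        self.goal = self.np_random.uniform(-10.0, 10.0, size=2)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        speed, joint_rate = np.clip(action, -1.0, 1.0)
        self.joint = np.clip(self.joint + joint_rate * self.dt, -0.6, 0.6)
        self.pose[2] += speed * np.tan(self.joint) * self.dt   # crude steering model
        self.pose[0] += speed * np.cos(self.pose[2]) * self.dt
        self.pose[1] += speed * np.sin(self.pose[2]) * self.dt
        dist = np.linalg.norm(self.goal - self.pose[:2])
        reward = -dist * self.dt         # penalize distance to the goal each step
        terminated = bool(dist < 0.5)    # reached the goal
        self.steps += 1
        truncated = self.steps >= 500    # episode time limit
        return self._obs(), float(reward), terminated, truncated, {}


if __name__ == "__main__":
    env = ToyArticulatedVehicleEnv()
    model = PPO("MlpPolicy", env, verbose=1)   # PPO with a default MLP policy
    model.learn(total_timesteps=50_000)
    model.save("toy_articulated_vehicle_ppo")
```

In a setup like this, robustness to slopes could be probed by training on the flat-ground environment and then evaluating the saved policy in a modified environment with inclined terrain, mirroring the evaluation described in the abstract.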

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-173273
Date January 2020
Creators Markgren, Jonas
Publisher Umeå universitet, Institutionen för fysik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
