Outside of the city environment, there are many unstructured and rough environments that are challenging for vehicle navigation tasks. In these environments, vehicle vibrations caused by rough terrain can be harmful to humans. In addition, a human operator cannot work around the clock. A promising solution is to use artificial intelligence to replace human operators. I test this by using the artificial intelligence technique known as reinforcement learning, with the Proximal Policy Optimization algorithm, to perform some basic locomotion tasks in a simulated environment with a simple terrain vehicle. The terrain vehicle consists of two chassis, each with two wheels attached, connected to each other by an articulation joint that can rotate to turn the vehicle. I show that a trained model can learn to operate the terrain vehicle and complete basic tasks, such as finding and following a path while avoiding obstacles. I tested robustness by evaluating performance on sloped terrains with a model trained to operate on flat ground. The results from the tests with different slopes show that, in most environments, the trained model could handle slopes up to around 7.5-10 degrees without much issue, even though it had no way of detecting the slope. This tells us that the models can perform their tasks quite well even when disturbances are introduced, as long as these disturbances don't require them to significantly change their behaviors.
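The abstract names Proximal Policy Optimization as the training algorithm. As a minimal illustrative sketch (not the thesis code), the core of PPO is its clipped surrogate objective, which can be computed for a batch of samples as follows; the variable names and example values here are assumptions for illustration only:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from Proximal Policy Optimization.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sample
    eps:       clipping range (0.2 is the value commonly used in practice)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Elementwise minimum gives a pessimistic bound that removes the
    # incentive to push the policy ratio outside [1 - eps, 1 + eps].
    return np.minimum(unclipped, clipped).mean()

# Hypothetical batch of three samples:
ratio = np.array([0.9, 1.1, 1.5])   # probability ratios
adv = np.array([1.0, -1.0, 2.0])    # advantage estimates
obj = ppo_clip_objective(ratio, adv)
```

In practice the policy parameters are updated by gradient ascent on this objective; the clipping is what keeps each update "proximal" to the previous policy.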
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-173273 |
Date | January 2020 |
Creators | Markgren, Jonas |
Publisher | Umeå universitet, Institutionen för fysik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |