In order to reduce bicycle-vehicle collisions, we design and implement a cost-effective embedded system to warn cyclists of approaching vehicles. The system uses an Odroid C2 single-board computer (SBC) to perform vehicle and lane detection in real time using only vision. The system warns cyclists of approaching cars using both a smartphone app and an LED indicator. Due to the limited performance of the Odroid C2 and other low-power, low-cost SBCs, we found that existing detection algorithms either run too slowly or lack the accuracy needed to be practical. Our solution to these limitations is a custom fully convolutional network (FCN) that is small enough to run at real-time speeds on the Odroid C2 but robust enough to have decent accuracy. We show that this FCN runs significantly faster than Tiny YOLOv3 and MobileNetv2 while achieving similar accuracy when all three are trained on a limited dataset.

Since no existing dataset both separates the fronts of vehicles from other poses and covers the context of city and country roads, we create our own. Creating a dataset to train any detector has traditionally been time consuming. We present and implement a way to do this efficiently with minimal hand annotation by generating semi-synthetic images, cropping relatively few positive images into many background images. This creates a wider background-class variance than would otherwise be possible.
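To make the FCN idea concrete, the sketch below shows the general shape of a small fully convolutional detector of the kind the abstract describes: a short downsampling convolutional stack ending in a 1x1 convolutional head, with no fully connected layers. This is an illustration only; the layer counts, channel widths, and grid-style output head are assumptions, not the thesis' actual architecture.

```python
# Illustrative sketch of a tiny fully convolutional detector (not the thesis' network).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Downsampling conv stack ending in a 1x1 conv head; with no fully connected
    layers, the network accepts any input resolution and stays cheap on a low-power SBC."""
    def __init__(self, num_outputs: int = 5):  # e.g. objectness + 4 box offsets per cell (assumed)
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )
        self.backbone = nn.Sequential(
            block(3, 16), block(16, 32), block(32, 64), block(64, 64),
        )
        self.head = nn.Conv2d(64, num_outputs, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output is a coarse grid of predictions (H/16 x W/16), one vector per cell.
        return self.head(self.backbone(x))

# Example: a 320x240 frame produces a 20x15 prediction grid.
# y = TinyFCN()(torch.zeros(1, 3, 240, 320))  # shape: (1, 5, 15, 20)
```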
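The semi-synthetic data generation described above (pasting a relatively small set of cropped vehicle-front positives into many background images) could look roughly like the following sketch. This is not the thesis' actual pipeline; the directory layout, PNG file format, random scale range, and one-box-per-image label format are all assumptions made for illustration.

```python
# Illustrative sketch: generate semi-synthetic training images by pasting cropped
# positives onto many backgrounds, recording one bounding box per generated image.
import random
from pathlib import Path
from PIL import Image

def paste_positive(background: Image.Image, positive: Image.Image):
    """Paste one cropped positive at a random location and return (image, bbox)."""
    bg = background.copy()
    # Randomly rescale the positive so vehicles appear at varied apparent distances.
    scale = random.uniform(0.3, 1.0)
    w = max(1, int(positive.width * scale))
    h = max(1, int(positive.height * scale))
    pos = positive.resize((w, h))
    x = random.randint(0, max(0, bg.width - w))
    y = random.randint(0, max(0, bg.height - h))
    bg.paste(pos, (x, y))
    return bg, (x, y, x + w, y + h)

def generate(positives_dir: str, backgrounds_dir: str, out_dir: str, per_background: int = 4):
    positives = [Image.open(p).convert("RGB") for p in Path(positives_dir).glob("*.png")]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, bg_path in enumerate(Path(backgrounds_dir).glob("*.png")):
        bg = Image.open(bg_path).convert("RGB")
        for j in range(per_background):
            img, bbox = paste_positive(bg, random.choice(positives))
            img.save(out / f"semi_synth_{i}_{j}.png")
            # Label file: "x1 y1 x2 y2" for the pasted positive (format is an assumption).
            (out / f"semi_synth_{i}_{j}.txt").write_text(" ".join(map(str, bbox)))
```

Because each pasted positive is reused against many distinct backgrounds, the generated set covers far more background variation than the hand-annotated positives alone, which is the point the abstract makes about background-class variance.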
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-8391 |
Date | 01 April 2019 |
Creators | Heydorn, Matthew Ryan |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |