This thesis presents findings in three areas of computer-vision estimation. First, the Kanade-Lucas-Tomasi (KLT) feature-tracking algorithm is improved by incorporating gyroscope data to compensate for camera rotation. The improved algorithm is compared with the original and shown to track features more reliably in the presence of large rotational motion. Next, a deep-neural-network approach to depth estimation is presented. Equations relating camera and feature motion to depth are derived, and the quantities they identify as necessary for depth estimation are given as inputs to a deep neural network trained to predict depth across an entire scene. This approach is shown to be effective at predicting the general structure of a scene. Finally, a method is presented for passively estimating the position and velocity of constant-velocity targets using only bearing and time-to-collision measurements. The method is paired with a path planner to avoid the tracked targets, and results demonstrate its effectiveness at avoiding collisions while maneuvering as little as possible.
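The gyro-compensation idea in the first contribution can be illustrated with a short sketch: under pure camera rotation, pixel locations move according to the infinite homography H = K R^T K^{-1}, so gyroscope rates integrated over one frame interval predict where each tracked feature should reappear, giving the KLT search a much better starting point. This is a minimal numpy illustration of that prediction step, not the thesis's implementation; the function names, the small-angle integration of the gyro rate, and the sign/frame convention relating the gyro reading to the camera frame are all assumptions for the sake of the example.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map an axis-angle vector w (rad) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def predict_features(pts, K_intr, omega, dt):
    """Predict pixel locations of features after a pure camera rotation.

    pts    : (N, 2) array of pixel coordinates in the previous frame
    K_intr : 3x3 camera intrinsic matrix
    omega  : gyro angular rate in the camera frame (rad/s) -- frame convention assumed
    dt     : time between frames (s)

    Integrates the gyro rate over dt (small-angle assumption) and applies the
    infinite homography H = K R^T K^{-1} to each point.
    """
    R = so3_exp(np.asarray(omega, dtype=float) * dt)
    H = K_intr @ R.T @ np.linalg.inv(K_intr)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]  # dehomogenize
```

In a full tracker, these predicted locations would seed the KLT search window for each feature (e.g., as initial-flow estimates), so the optimization only has to recover the small residual motion rather than the full rotation-induced displacement.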
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-11229 |
Date | 07 December 2023 |
Creators | Adams, James J |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | https://lib.byu.edu/about/copyright/ |