Video acquired by a camera mounted on a mini Unmanned Air Vehicle (mUAV) can be very helpful in Wilderness Search and Rescue and many other applications, but it is commonly plagued by limited spatial and temporal fields of view, distracting jittery motions, disorienting rotations, and noisy, distorted images. Together, these problems make it difficult for human viewers to identify objects of interest and to infer correct orientations throughout the video. To expand the temporal and spatial field of view, stabilize the video, and better orient users of noisy and distorted mUAV video, a method is proposed that estimates, in software and in real time, the relative motion from each frame to the next by tracking a small subset of features between frames. Using these relative motions, a local Euclidean mosaic of the video can be created, and a curve can be fit to the video's accumulated motion path to stabilize the presentation of both the video and the local Euclidean mosaic. The improvement in users' ability to perform the common search-and-rescue task of identifying objects of interest in the stabilized and locally mosaiced mUAV video is then evaluated. Finally, remaining limitations are discussed along with some possibilities for future work.
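The stabilization step the abstract describes, accumulating per-frame relative motions into a motion path and smoothing that path to derive corrective transforms, might be sketched as follows. This is an illustrative sketch only: the function names are invented, and a moving-average smoother stands in for the curve fitting used in the thesis. Each relative motion is modeled as a Euclidean (translation plus rotation) triple (dx, dy, dtheta).

```python
def accumulate_path(rel_motions):
    """Integrate per-frame relative motions (dx, dy, dtheta)
    into an accumulated motion path, starting from the origin."""
    path = [(0.0, 0.0, 0.0)]
    for dx, dy, dth in rel_motions:
        x, y, th = path[-1]
        path.append((x + dx, y + dy, th + dth))
    return path

def smooth_path(path, radius=2):
    """Smooth the accumulated path with a simple moving average
    (a stand-in for the curve fitting described in the abstract)."""
    n = len(path)
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = path[lo:hi]
        smoothed.append(tuple(sum(c) / len(window) for c in zip(*window)))
    return smoothed

def corrections(raw, smooth):
    """Per-frame corrective offsets that warp the jittery raw path
    onto the smoothed path, yielding the stabilized presentation."""
    return [(sx - x, sy - y, sth - th)
            for (x, y, th), (sx, sy, sth) in zip(raw, smooth)]
```

In a real pipeline the relative motions would come from feature tracking between consecutive frames (for example, sparse optical flow followed by a robust Euclidean transform fit), and the corrective transforms would be applied to each frame before display or mosaicing.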
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-2055 |
Date | 01 February 2007 |
Creators | Gerhardt, Damon Dyck |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | http://lib.byu.edu/about/copyright/ |