A Study on Indoor Navigation by Augmented Reality and Down-looking Omni-vision Techniques Using Mobile Devices / 以俯視式環場電腦視覺及行動裝置作擴增實境式室內導覽

Master's thesis / National Chiao Tung University / Institute of Multimedia Engineering / 100 / When people visit unfamiliar or complicated indoor environments, a navigation system is usually needed to guide them to their desired destinations. In this study, an indoor navigation system based on augmented reality (AR) and computer vision techniques, running on a mobile device such as an HTC Flyer or an iPad, is proposed.
First, an indoor vision infrastructure is set up by attaching fisheye cameras to the ceiling of the navigation environment. The user's location and orientation are detected by a server-side system, and the analysis results are sent to a client-side system running on the user's mobile device. The server-side system also sends the surrounding-environment information and the navigation path to the client. The client-side system then displays this information in an AR fashion, providing clear guidance for the user during navigation.
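As a rough illustration of this server-to-client exchange, the sketch below shows one possible per-update payload pushed from the localization server to the mobile client. The field names, units, and transport are assumptions for illustration only, not taken from the thesis.

```python
import json

# Hypothetical per-update message from the server-side system to the client;
# all field names and values below are illustrative assumptions.
update = {
    "user": {"x_m": 12.4, "y_m": 7.1, "heading_deg": 135.0},   # detected location and orientation
    "path": [[12.4, 7.1], [14.0, 7.1], [14.0, 10.5]],          # planned waypoints in metres
    "landmarks": [                                              # surrounding-environment information
        {"name": "Exhibit A", "x_m": 13.2, "y_m": 8.0},
    ],
}

message = json.dumps(update).encode("utf-8")   # e.g. sent over a socket to the mobile device
```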
For human localization, a vision-based technique is proposed that analyzes the images captured by the fisheye cameras and detects human activities in the environment. To transform the coordinates of image points into real-world space, a space-mapping technique is proposed. Furthermore, three techniques are integrated to detect the human orientation effectively: analysis of human motions in consecutive images, use of the orientation sensor on the user's mobile device, and localization of a color edge mark attached to the top of the mobile device in the omni-images. Together, these techniques provide reliable human orientation detection.
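The space-mapping idea can be sketched minimally as fitting a planar homography from calibration point pairs and using it to map omni-image pixels onto floor coordinates. The thesis's actual mapping (and its treatment of fisheye distortion) may differ; the calibration values below are made up.

```python
import numpy as np
import cv2

# Assumed calibration pairs: pixel positions in the omni-image and the
# corresponding floor positions in metres (illustrative values only).
img_pts   = np.float32([[410, 210], [630, 215], [625, 470], [405, 465]])
floor_pts = np.float32([[0.0, 0.0], [3.0, 0.0], [3.0, 3.0], [0.0, 3.0]])

H, _ = cv2.findHomography(img_pts, floor_pts)   # planar space-mapping stand-in

def image_to_floor(u, v):
    """Map a detected image point onto the floor plane in real-world metres."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(image_to_floor(515, 340))   # approximate real-world position of a detection
```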
A path-planning technique is also proposed to generate a path from the user's current spot to a selected destination using an environment map. The environment map is constructed from a floor-plan drawing of the indoor environment. An obstacle-avoidance map is also created from the floor-plan drawing and is used to determine the avoidance direction when a planned path collides with an obstacle in the environment.
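As a generic stand-in for this planning step, the sketch below runs A* over a grid derived from a floor plan, with obstacle cells blocked; it does not reproduce the thesis's own planning and obstacle-avoidance scheme.

```python
from heapq import heappush, heappop

# A* over a grid map built from a floor plan: 0 = free cell, 1 = obstacle.
def plan_path(grid, start, goal):
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])        # Manhattan heuristic
    frontier, came_from, cost = [(h(start, goal), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heappop(frontier)
        if cur == goal:                                          # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):        # 4-connected moves
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                g = cost[cur] + 1
                if g < cost.get(nxt, float("inf")):
                    cost[nxt], came_from[nxt] = g, cur
                    heappush(frontier, (g + h(nxt, goal), nxt))
    return None                                                  # no path exists

floor = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(plan_path(floor, (0, 0), (2, 3)))
```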
Finally, the navigation information is overlaid on the live image shown on the mobile device to provide an AR navigation interface. A method for estimating the field of view of the camera on the mobile device is proposed. The estimated field of view is used to construct a transformation matrix by which real-world points are projected onto the screen plane, so that the navigation information can be overlaid on the corresponding real-world objects in the image, accomplishing the AR function of the system.
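The projection step can be sketched as follows: given an estimated horizontal field of view, a pinhole-style projection maps a point expressed in the camera frame onto the screen so a label can be drawn over it. The function name and the sample values are illustrative assumptions, not the thesis's actual transformation.

```python
import numpy as np

def project_to_screen(point_cam, fov_h_deg, width_px, height_px):
    """point_cam = (x right, y down, z forward) in metres, camera coordinates."""
    f = (width_px / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)  # focal length in pixels from FOV
    x, y, z = point_cam
    if z <= 0:
        return None                       # behind the camera, nothing to draw
    u = width_px / 2.0 + f * x / z        # pinhole projection onto the screen plane
    v = height_px / 2.0 + f * y / z
    return u, v

# e.g. a destination sign 1.5 m to the right and 4 m ahead of the device camera
print(project_to_screen((1.5, 0.0, 4.0), fov_h_deg=60.0, width_px=1024, height_px=600))
```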
Good experimental results are presented to show the feasibility of the proposed methods for real applications. Precision measures and statistics are also included, showing that the system produces data precise enough for accurate display of visiting targets and for reliable environment navigation.

Identifier: oai:union.ndltd.org:TW/100NCTU5641040
Date: January 2012
Creators: Hsieh, Meng-Yuan, 謝孟原
Contributors: Tsai, Wen-Hsiang, 蔡文祥
Source Sets: National Digital Library of Theses and Dissertations in Taiwan
Language: en_US
Detected Language: English
Type: thesis (學位論文)
Format: 130
