Master's / National Taiwan University / Graduate Institute of Communication Engineering / 107 / We propose a novel robot system that navigates indoor environments using spatial cues from binaural sound. The system receives audio signals through a pair of binaural microphones fitted with 3-D printed ear mockups for realistic acoustic effects. A series of neural models imitating the human auditory pathway then processes the sound into neural signals at successive stages (cochlea, superior olivary complex, inferior colliculus, and finally primary auditory cortex), extracting the interaural time difference (ITD) and interaural level difference (ILD) of the sound. The neural models for analog front-end signal processing are simulated with biologically plausible, realistic modeling tools: the IPEM toolbox and the Nengo simulator. The primary auditory cortex, which is responsible for inferring the sound-source azimuth, is modeled as a supervised-learning deep neural network built with Keras and TensorFlow, mimicking the plasticity of the brain's auditory cortex. Finally, a navigation planner generates goals based on the proposed intelligence levels and guides the robot base toward the sound source. On the hardware side, the system is implemented on a TurtleBot3 mobile base together with a GPU-accelerated PC for neural network simulations. The design, implementation details, and testing results are presented, analyzed, and discussed.
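The ITD cue described in the abstract can be illustrated with a minimal sketch: estimate the inter-channel lag by cross-correlation and invert a far-field geometric model, ITD = d·sin(θ)/c, to recover an azimuth. This is a simplified stand-in for the thesis's neural pipeline, not its actual implementation; the ear spacing `EAR_DISTANCE` and the free-field model are illustrative assumptions.

```python
import numpy as np

FS = 44_100            # sample rate (Hz)
EAR_DISTANCE = 0.18    # assumed microphone spacing (m), not from the thesis
SPEED_OF_SOUND = 343.0 # m/s

def estimate_itd(left, right, fs=FS):
    """Estimate the interaural time difference (s) via cross-correlation.
    Positive ITD means the sound reached the left microphone first."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # lag in samples
    return lag / fs

def itd_to_azimuth(itd, d=EAR_DISTANCE, c=SPEED_OF_SOUND):
    """Invert the far-field model ITD = d*sin(theta)/c; returns degrees,
    positive toward the left ear."""
    s = np.clip(itd * c / d, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic demo: a noise burst that reaches the right ear 10 samples late,
# i.e., a source on the robot's left side.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
delay = 10
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])

itd = estimate_itd(left, right)       # ≈ 10 / 44100 s
azimuth = itd_to_azimuth(itd)         # positive angle toward the left
```

In the actual system this mapping is learned by the supervised DNN rather than computed in closed form, which lets the robot adapt to the ear mockups' real transfer characteristics.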
Identifier | oai:union.ndltd.org:TW/107NTU05435010 |
Date | January 2018 |
Creators | Chung-Yuan Chen, 陳重源 |
Contributors | Shyh-Kang Jeng, 鄭士康 |
Source Sets | National Digital Library of Theses and Dissertations in Taiwan |
Language | en_US |
Detected Language | English |
Type | Degree thesis ; thesis |
Format | 80 pages |