An auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many such streams, which the listener uses to analyze and understand the environment. Computer analyses that attempt to mimic human analysis of a scene must first perform Audio Scene Segmentation (ASS). ASS finds applications in surveillance, automatic speech recognition, and human-computer interfaces. Microphone arrays can be employed to extract streams corresponding to spatially separated sources. However, when a source moves to a new location during a period of silence, such a system loses track of the source, resulting in multiple spatially localized streams for the same source. This thesis proposes to identify localized streams associated with the same source using auditory features extracted from the beamformed signal. ASS based on spatial cues is performed first; auditory features are then extracted, and segments are linked together based on the similarity of their feature vectors. An experiment was carried out with two simultaneous speakers, using a classifier to assign each localized stream to one speaker or the other. The best performance, an accuracy of 96.2%, was achieved when pitch appended with Gammatone Frequency Cepstral Coefficients (GFCC) was used as the feature vector.
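For illustration only, the sketch below shows the linking step the abstract describes: each spatially localized segment is summarized by a feature vector of pitch appended to GFCCs, and a classifier assigns segments to one of two speakers. The front end (`segment_features`) is a hypothetical placeholder; the thesis does not specify the classifier, so an RBF SVM stands in here as one plausible choice.

```python
# Minimal sketch of stream linking with pitch + GFCC features.
# segment_features is a hypothetical stand-in for the real front end
# (gammatone filterbank GFCCs plus a pitch estimate per segment);
# dummy random values are used so the sketch runs end to end.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def segment_features(segment_audio, sr):
    """Placeholder: return 13 stand-in GFCCs with a pitch value (Hz)
    appended, mimicking the feature vector described in the abstract."""
    gfcc = rng.normal(size=13)     # stand-in for mean GFCC vector
    pitch = rng.uniform(80, 300)   # stand-in for pitch estimate in Hz
    return np.append(gfcc, pitch)

# Labeled segments from each of the two speakers (enrollment pass).
X_train = np.stack([segment_features(None, 16000) for _ in range(40)])
y_train = np.array([0, 1] * 20)

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Link a new localized segment to speaker 0 or speaker 1.
new_segment = segment_features(None, 16000)
speaker = clf.predict(new_segment[None, :])[0]
print(f"segment linked to speaker {speaker}")
```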
Identifier | oai:union.ndltd.org:uky.edu/oai:uknowledge.uky.edu:gradschool_theses-1626 |
Date | 01 January 2010 |
Creators | Unnikrishnan, Harikrishnan |
Publisher | UKnowledge |
Source Sets | University of Kentucky |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | University of Kentucky Master's Theses |