For many undersea research scenarios, instruments must be deployed for more than one month, the basic time interval over which many phenomena evolve. With limited power supply and memory, management strategies are crucial to the success of data collection. For acoustic recording of undersea activities, in general, either a preprogrammed duty cycle is configured to log a partial time series, or the spectrogram of the signal is derived and stored, in order to use the available memory efficiently. To overcome this limitation, we propose an algorithm that classifies different sound patterns and stores only the sound data of interest.
Features such as characteristic frequencies, large amplitudes at selected frequencies, or intensity thresholds are commonly used to identify or classify different patterns. One main limitation of this type of approach is that the algorithm is generally range-dependent and, as a result, also sound-level-dependent, making it less robust to changes in the environment. On the other hand, one interesting observation is that when human beings look at a spectrogram, they can immediately tell the difference between two patterns. Even with no knowledge of the nature of the source, people can still discern tiny dissimilarities and group the patterns accordingly. This suggests that recognition and classification can be carried out on the spectrogram as a pattern recognition problem. In this work, we propose a modified Principal Component Analysis that generates feature points from moment invariants and sound level variance to classify sounds of interest in the ocean. Among the many sound sources in the ocean, we focus on three categories: rain, ships, and whales/dolphins.
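The features named above can be illustrated with a short sketch. The thesis itself does not publish code, so the following is only a minimal, assumed implementation: it computes the seven classical Hu moment invariants of a spectrogram treated as a grayscale image, and appends the overall level variance. All function names are illustrative.

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu moment invariants of a 2-D array.

    The array is treated as a grayscale intensity image
    (e.g. a spectrogram magnitude).
    """
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00

    def mu(p, q):
        # central moment of order (p, q)
        return ((x - xbar) ** p * (y - ybar) ** q * img).sum()

    def eta(p, q):
        # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def feature_vector(spectrogram):
    """Moment invariants plus sound-level variance, as described in the text."""
    return np.append(hu_moments(spectrogram), np.var(spectrogram))
```

Because the Hu invariants are built from central moments, they are unchanged when the same pattern appears at a different position in the spectrogram, which is one reason moment-based features are less sensitive to where an event falls in a recording window.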
The sound data were recorded with the Passive Acoustic Listener developed by Nystuen at the Applied Physics Laboratory, University of Washington. From these data, we manually identified twenty frames for each case and used them as the base training set. Feeding several unknown clips into classification experiments, we find that point-based feature extraction is an effective way to describe whistle vocalizations, and we believe this algorithm would be useful for extracting features from noisy recordings of the calls of a wide variety of species.
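The train-then-classify procedure described above can be sketched as follows. This is a plain PCA with nearest-neighbor assignment, assumed for illustration only; it does not reproduce the thesis' specific modification of PCA, and all names are hypothetical.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on training features X (n_samples x n_features).

    Returns the mean and the top-k principal axes, obtained from
    the SVD of the centered data matrix.
    """
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def pca_project(x, mean, axes):
    # project a feature vector onto the principal subspace
    return (x - mean) @ axes.T

def classify(x, train_proj, labels, mean, axes):
    """Assign the label of the nearest training point in PCA space."""
    z = pca_project(x, mean, axes)
    d = np.linalg.norm(train_proj - z, axis=1)
    return labels[int(np.argmin(d))]
```

With twenty labeled frames per category, the training projections `train_proj` are precomputed once; each unknown clip is then reduced to a feature vector, projected, and labeled by its nearest neighbor.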
Identifier | oai:union.ndltd.org:NSYSU/oai:NSYSU:etd-0723107-120909 |
Date | 23 July 2007 |
Creators | Wang, Chiao-mei |
Contributors | Chau-Chang Wang, Barry Ma, Hsin-Hung Chen |
Publisher | NSYSU |
Source Sets | NSYSU Electronic Thesis and Dissertation Archive |
Language | Cholon |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0723107-120909 |
Rights | unrestricted, Copyright information available at source archive |