1. Using an Aural Classifier to Discriminate Cetacean Vocalizations. Binder, Carolyn, 26 March 2012.
To positively identify marine mammals using passive acoustics, large volumes of data are often collected that must be processed by a trained analyst. To reduce analyst workload, an automatic detector can produce candidate detections that are passed to an automatic classifier, greatly reducing the number of false detections. This requires a robust classifier capable of performing inter-species classification as well as discriminating cetacean vocalizations from anthropogenic noise sources. A prototype aural classifier was developed at Defence Research and Development Canada that uses perceptual signal features modelled on those employed by the human auditory system. The dataset included anthropogenic passive transients and vocalizations from five cetacean species: bowhead, humpback, North Atlantic right, minke, and sperm whales. Discriminant analysis was implemented to replace principal component analysis; its projection improved between-species discrimination during multiclass cetacean classification. The aural classifier successfully identified the vocalizing cetacean species. The area under the receiver operating characteristic curve (AUC) quantifies two-class classifier performance, and the M-measure is used when there are three or more classes; both have a maximum possible value of 1.00, which indicates an ideal classifier. Accurate classification results were obtained for multiclass classification of all species in the dataset (M = 0.99), and for the challenging bowhead/humpback (AUC = 0.97) and sperm whale click/anthropogenic transient (AUC = 1.00) two-class classifications.
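A minimal sketch of the evaluation approach this abstract describes: project features with linear discriminant analysis (one common form of discriminant analysis) and score a two-class problem with AUC. The feature matrix, labels, and class names below are placeholder assumptions; the actual DRDC aural classifier's perceptual features and model are not reproduced here.

```python
# Sketch: discriminant-analysis projection plus AUC evaluation (assumed workflow).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # placeholder "perceptual" feature matrix
y = rng.integers(0, 2, size=200)    # placeholder labels, e.g. 0 = humpback, 1 = bowhead

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Discriminant analysis supplies both a class-separating projection and a classifier.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

scores = lda.decision_function(X_test)  # continuous scores for the positive class
auc = roc_auc_score(y_test, scores)     # AUC = 1.00 would indicate an ideal classifier
print(f"Two-class AUC: {auc:.2f}")
```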
2. Monitoring fish using passive acoustics. Mouy, Xavier, 31 January 2022.
Some fish produce sounds for a variety of reasons, such as to find mates, defend their territory, or maintain cohesion within their group. These sounds could be used to non-intrusively detect the presence of fish and potentially to estimate their number (or density) over large areas and long time periods. However, many fish sounds have not yet been associated with specific species, which limits the usefulness of this approach. While recording fish sounds in tanks is reasonably straightforward, it presents several problems: many fish do not produce sounds in captivity, or their behavior and sound production are significantly altered, and the complex acoustic propagation conditions in tanks often lead to distorted measurements. The work presented in this thesis aims to address these issues by providing methodologies to record, detect, and identify species-specific fish sounds in the wild.

A set of hardware and software solutions was developed to simultaneously record fish sounds, acoustically localize the fish in three dimensions, and record video to identify the fish and observe their behavior. Three platforms were developed and tested in the field. The first platform, referred to as the large array, is composed of six hydrophones connected to an AMAR acoustic recorder and two open-source autonomous video cameras (FishCams) developed during this thesis. These instruments are secured to a 2 m x 2 m x 3 m PVC frame that can be transported and assembled in the field. The hydrophone configuration for this array was defined using a simulated annealing optimization that minimized localization uncertainties. This array provides the largest field of view and the most accurate acoustic localization, and is well suited to long-term deployments (weeks). The second platform, referred to as the mini array, uses a single FishCam and four hydrophones connected to a SoundTrap acoustic recorder on a one-cubic-metre PVC frame; it can be deployed more easily in constrained locations or on rough, uneven seabeds. The third platform, referred to as the mobile array, consists of four hydrophones connected to a SoundTrap recorder and mounted on a tethered Trident underwater drone with built-in video, allowing remote control and real-time repositioning in response to observed fish presence rather than long-term deployment. For each array, acoustic localization is performed by measuring time differences of arrival between hydrophones and estimating the sound-source location using linearized (large array) or non-linear (mini and mobile arrays) inversion, as sketched below. Fish sounds are automatically detected and localized in three dimensions, and sounds localized within the field of view of the camera(s) are assigned to a fish species by manually reviewing the video recordings. The three platforms were deployed at four locations off the east coast of Vancouver Island, British Columbia, Canada, and allowed the identification of sounds from quillback rockfish (Sebastes maliger), copper rockfish (Sebastes caurinus), and lingcod (Ophiodon elongatus), species that had not previously been documented to produce sounds. While each platform has its own advantages and limitations, using them in coordination helps identify fish sounds over different habitats and under various budget and logistical constraints.

To make passive acoustics a more viable way to monitor fish in the wild, this thesis also investigates automatic detection and classification algorithms for efficiently finding fish sounds in large passive acoustic datasets. The proposed approach detects acoustic transients using a measure of spectrogram variance and classifies them as “noise” or “fish sounds” with a binary classifier (also sketched below). Five classification algorithms were trained and evaluated on a dataset of more than 96,000 manually annotated examples of fish sounds and noise from five locations off Vancouver Island. The best-performing algorithm (random forest) achieved an F-score of 0.84 (precision = 0.82, recall = 0.86) on the test dataset. Analysis of 2.5 months of acoustic data collected in a rockfish conservation area off Vancouver Island shows that the proposed detector can be used to efficiently explore large datasets, formulate hypotheses, and help answer practical conservation questions.
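A minimal sketch of the time-difference-of-arrival localization step described above, using a generic non-linear least-squares solver as a stand-in for the thesis inversion schemes. The hydrophone geometry, sound speed, and simulated source position are placeholder assumptions, not the actual array configurations.

```python
# Sketch: 3-D source localization from TDOAs via non-linear least squares (assumed setup).
import numpy as np
from scipy.optimize import least_squares

c = 1485.0  # nominal sound speed in seawater, m/s (assumption)

# Placeholder four-hydrophone geometry on a ~1 m frame (metres)
hydrophones = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

true_source = np.array([2.0, 1.5, 1.0])            # simulated fish position
ranges = np.linalg.norm(hydrophones - true_source, axis=1)
tdoas = (ranges[1:] - ranges[0]) / c               # measured TDOAs relative to hydrophone 0

def residuals(xyz):
    # Difference between modelled and measured TDOAs for a candidate position
    r = np.linalg.norm(hydrophones - xyz, axis=1)
    return (r[1:] - r[0]) / c - tdoas

fit = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0]))  # arbitrary initial guess
print("Estimated source position:", fit.x)
```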
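And a minimal sketch of the detect-then-classify workflow: flag high-variance spectrogram frames as candidate transients, then label candidates with a random forest and report precision, recall, and F-score. The signal, features, threshold, and labels are synthetic placeholders, not the thesis pipeline or its annotated dataset.

```python
# Sketch: spectrogram-variance transient detection plus a random-forest fish/noise classifier.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support

fs = 4000
rng = np.random.default_rng(1)
signal = rng.normal(size=10 * fs)  # placeholder recording (background noise)
# Inject a short tonal pulse standing in for a fish sound
signal[2 * fs:2 * fs + 400] += 5 * np.sin(2 * np.pi * 300 * np.arange(400) / fs)

f, t, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
frame_var = S.var(axis=0)                                        # per-frame spectral variance
candidates = np.where(frame_var > 5 * np.median(frame_var))[0]   # crude transient detector
print(f"{candidates.size} candidate transient frames")

# Placeholder feature vectors and labels standing in for annotated fish/noise examples
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)  # 0 = noise, 1 = fish sound (placeholder)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
pred = clf.predict(X[400:])
precision, recall, f1, _ = precision_recall_fscore_support(y[400:], pred, average="binary")
print(f"precision={precision:.2f} recall={recall:.2f} F-score={f1:.2f}")
```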