Association of Sound to Motion in Video Using Perceptual Organization

Ravulapalli, Sunil Babu 29 March 2006 (has links)
Technological developments and innovations of the first forty years of the digital era have primarily addressed either the audio or the visual senses, and designers have consequently focused on the audio or the visual aspects of design. In video surveillance, the data under consideration has traditionally been visual. However, new behavioral and physiological studies have established cross-modality in human perception: humans do not process audio and visual stimuli separately, but perceive a scene based on all available stimuli. Similar cues are now being used to develop surveillance systems that exploit both the audio and the visual data available. Human beings can easily associate a particular sound with an object in their surroundings. Drawing on such studies, we demonstrate a technique by which we can isolate concurrent audio and video events and associate them based on perceptual grouping principles. Associating a sound with an object can form a part of a larger surveillance system by producing a better description of objects. We represent audio in the pitch-time domain and use image processing algorithms such as line detection to isolate significant events. These events are then grouped according to the gestalt principles of proximity and similarity, which operate in audio. Once auditory events are isolated, we can extract their periodicity. In video, we extract objects using simple background subtraction, and we extract the motion and shape periodicities of all objects by tracking their position or number of pixels in each frame. By comparing the periodicities in audio and video using a simple index, we can associate audio with video. We show results on five scenarios in outdoor settings with different kinds of human activity, such as running and walking, and other moving objects such as balls and cars.
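The periodicity-matching step described above can be illustrated with a minimal sketch. This is not the author's implementation; the function names and the nearest-period matching index are illustrative assumptions, standing in for the abstract's "simple index" that compares audio and video periodicities:

```python
import numpy as np

def dominant_period(signal, fps):
    """Estimate the dominant periodicity of a 1-D signal (e.g. an
    object's pixel count per frame, or an audio event envelope)
    from the peak of its magnitude spectrum."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    peak = np.argmax(spectrum[1:]) + 1    # skip the zero-frequency bin
    return 1.0 / freqs[peak]              # period in seconds

def associate(audio_periods, video_periods):
    """Assign each audio event to the video object whose motion or
    shape periodicity is closest (an assumed nearest-period index)."""
    return [int(np.argmin([abs(a - v) for v in video_periods]))
            for a in audio_periods]
```

For example, a bouncing ball whose pixel count oscillates at 2 Hz in a 30 fps video yields a dominant period near 0.5 s, which would then be matched against the periods of the isolated audio events.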

Regularity extraction from non-adjacent sounds

Bendixen, Alexandra, Schröger, Erich, Ritter, Walter, Winkler, István 29 July 2022 (has links)
The regular behavior of sound sources helps us to make sense of the auditory environment. Regular patterns may, for instance, convey information on the identity of a sound source (such as the acoustic signature of a train moving on the rails). Yet typically, this signature overlaps in time with signals emitted from other sound sources. It is generally assumed that auditory regularity extraction cannot operate upon this mixture of signals because it only finds regularities between adjacent sounds. In this view, the auditory environment would be grouped into separate entities by means of readily available acoustic cues such as separation in frequency and location. Regularity extraction processes would then operate upon the resulting groups. Our new experimental evidence challenges this view. We presented two interleaved sound sequences which overlapped in frequency range and shared all acoustic parameters. The sequences only differed in their underlying regular patterns. We inserted deviants into one of the sequences to probe whether the regularity was extracted. In the first experiment, we found that these deviants elicited the mismatch negativity (MMN) component. Thus the auditory system was able to find the regularity between the non-adjacent sounds. Regularity extraction was not influenced by sequence cohesiveness as manipulated by the relative duration of tones and silent inter-tone-intervals. In the second experiment, we showed that a regularity connecting non-adjacent sounds was discovered only when the intervening sequence also contained a regular pattern, but not when the intervening sounds were randomly varying. This suggests that separate regular patterns are available to the auditory system as a cue for identifying signals coming from distinct sound sources. Thus auditory regularity extraction is not necessarily confined to a processing stage after initial sound grouping, but may precede grouping when other acoustic cues are unavailable.
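The stimulus design described above, two regular patterns interleaved into one stream with rare deviants probing one of them, can be sketched as follows. This is not the authors' stimulus code; the tone frequencies, pattern lengths, and deviant probability are illustrative assumptions:

```python
import random

# Two regular tone patterns (frequencies in Hz) that will be
# interleaved into a single stream; deviants are inserted into
# sequence A to probe whether its regularity was extracted.
PATTERN_A = [440.0, 440.0, 440.0]
PATTERN_B = [523.0, 659.0]
DEVIANT_A = 494.0

def interleaved_stream(n_pairs, deviant_prob=0.1, seed=0):
    """Build an alternating A/B tone stream of n_pairs pairs,
    replacing A-tones with a deviant at the given probability."""
    rng = random.Random(seed)
    stream = []
    for i in range(n_pairs):
        tone_a = PATTERN_A[i % len(PATTERN_A)]
        if rng.random() < deviant_prob:
            tone_a = DEVIANT_A            # violates A's regular pattern
        stream.append(("A", tone_a))
        stream.append(("B", PATTERN_B[i % len(PATTERN_B)]))
    return stream
```

Setting `deviant_prob=0.0` yields the standard-only stream; the key manipulation in the second experiment corresponds to replacing `PATTERN_B` with randomly varying tones, which abolished regularity extraction across the non-adjacent A-tones.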
