1

Frequency discrimination, integration processes and auditory continuity

Goodacre, Jonathan January 1999 (has links)
No description available.
2

Spectral and temporal integration of brief tones

Hoglund, Evelyn M., January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 105-107).
3

Average evoked cortical potentials as an electrophysiologic approach to the study of temporal integration

Don, Manuel January 1967 (has links)
No description available.
4

Auditory Based Modification of MFCC Feature Extraction for Robust Automatic Speech Recognition

Chiou, Sheng-chiuan 01 September 2009 (has links)
The human auditory perception system is much more noise-robust than any state-of-the-art automatic speech recognition (ASR) system. It is expected that the noise-robustness of speech feature vectors may be improved by employing more human auditory functions in the feature extraction procedure. Forward masking is a phenomenon of human auditory perception in which a weaker sound is masked by a preceding stronger masker. In this work, two human auditory mechanisms, synaptic adaptation and temporal integration, are implemented as filter functions and incorporated into MFCC feature extraction to model forward masking. A filter optimization algorithm is proposed to optimize the filter parameters. The performance of the proposed method is evaluated on the Aurora 3 corpus, and the training/testing procedure follows the standard setting provided by the Aurora 3 task. The synaptic adaptation filter achieves a relative improvement of 16.6% over the baseline. The temporal integration and modified temporal integration filters achieve relative improvements of 21.6% and 22.5% respectively. Combining synaptic adaptation with each of the temporal integration filters results in further improvements of 26.3% and 25.5%. Applying the filter optimization improves the synaptic adaptation filter and the two temporal integration filters, yielding improvements of 18.4%, 25.2%, and 22.6% respectively. The combined-filter models also improve, with relative improvements of 26.9% and 26.3%.
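The two filter mechanisms named in the abstract can be sketched as simple recursive filters applied along the time axis of an MFCC frame sequence. This is a minimal illustration, not the thesis's method: the filter shapes and the `alpha`/`beta` values are hypothetical stand-ins for the parameters the thesis optimizes on Aurora 3.

```python
import numpy as np

def temporal_integration(frames, alpha=0.6):
    """First-order leaky integrator along the time axis.

    frames: (T, D) array of MFCC feature vectors.
    alpha:  forgetting factor (illustrative value, not the optimized one).
    """
    out = np.empty_like(frames, dtype=float)
    state = np.zeros(frames.shape[1])
    for t, frame in enumerate(frames):
        state = alpha * state + (1.0 - alpha) * frame
        out[t] = state
    return out

def synaptic_adaptation(frames, beta=0.9):
    """Crude forward-masking sketch: subtract a decaying trace of the
    preceding frames, so a frame following a strong masker is attenuated."""
    out = np.empty_like(frames, dtype=float)
    trace = np.zeros(frames.shape[1])
    for t, frame in enumerate(frames):
        out[t] = frame - trace
        trace = beta * trace + (1.0 - beta) * frame
    return out
```

For a constant input, the integrator converges to the input value while the adaptation output decays toward zero, which is the qualitative behavior forward masking requires.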
5

Cognitive resources in audiovisual speech perception

BUCHAN, JULIE N 11 October 2011 (has links)
Most events that we encounter in everyday life provide our different senses with correlated information, and audiovisual speech perception is a familiar instance of multisensory integration. Several approaches will be used to further examine the role of cognitive factors in audiovisual speech perception. The main focuses of this thesis are the influences of cognitive load and selective attention on audiovisual speech perception, as well as the integration of auditory and visual information in talking distractor faces. The influence of cognitive factors on the temporal integration of auditory and visual speech, and on gaze behaviour during audiovisual speech, will also be addressed. The overall results of the experiments presented here suggest that the integration of auditory and visual speech information is quite robust to various attempts to modulate it. Adding a cognitive load task shows minimal disruption of the integration of auditory and visual speech information. Changing attentional instructions to get subjects to selectively attend to either the auditory or visual speech information also has a rather modest influence on the observed integration. Generally, the integration of temporally offset auditory and visual information seems rather insensitive to cognitive load or selective attentional manipulations. The processing of visual information from distractor faces seems to be limited. The language of the visually articulating distractors does not appear to provide information that is helpful for matching together the auditory and visual speech streams. Audiovisual speech distractors are no more distracting than auditory distractor speech paired with a still image, suggesting limited processing or integration of the visual and auditory distractor information.
Gaze behaviour during audiovisual speech perception appears to be relatively unaffected by an increase in cognitive load, but is somewhat influenced by attentional instructions to selectively attend to the auditory or visual information. Additionally, both the congruency of the consonant and the temporal offset of the auditory and visual stimuli have small but rather robust influences on gaze. / Thesis (Ph.D, Psychology) -- Queen's University, 2011-09-30 23:31:07.754
6

Quantifying temporal aspects of low-level multisensory processing in children with autism spectrum disorders : a psychophysical study

Foss-Feig, Jennifer H. January 2008 (has links)
Thesis (M. S. in Psychology)--Vanderbilt University, Aug. 2008. / Title from title screen. Includes bibliographical references.
7

Human motion capture by RGB-D sensors

Masse, Jean-Thomas 25 September 2015 (has links)
The simultaneous arrival of depth-and-color sensors and of faster-than-real-time skeleton detection algorithms has led to a surge of new research in human motion capture, a key component of human-machine interaction. But the application context of these advances is voluntary, fronto-parallel interaction with the sensor, which allowed designers certain approximations and requires a specific sensor placement. In this thesis, we present a multi-sensor approach, designed to improve the robustness and accuracy of human joint positioning, based on a trajectory-smoothing process using temporal integration and on filtering of the skeletons detected by each sensor. The approach is tested on a new, specially acquired database, with a specifically adapted calibration methodology. An initial extension to joint perception with context, here objects, is also proposed.
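The multi-sensor idea described above, combining per-sensor skeletons and smoothing the resulting trajectory, can be sketched as confidence-weighted fusion followed by an exponential filter. This is an illustrative simplification under assumed inputs (per-joint confidences, a single smoothing weight `alpha`), not the thesis's actual trajectory-smoothing scheme.

```python
import numpy as np

def fuse_and_smooth(skeletons, confidences, prev, alpha=0.7):
    """Fuse per-sensor joint estimates by confidence-weighted averaging,
    then smooth the trajectory against the previous frame's estimate.

    skeletons:   (S, J, 3) joint positions detected by S sensors.
    confidences: (S, J) per-joint detection confidences.
    prev:        (J, 3) previous smoothed skeleton, or None on the first frame.
    alpha:       smoothing weight (hypothetical value).
    """
    w = confidences[..., None]                                   # (S, J, 1)
    fused = (w * skeletons).sum(axis=0) / np.clip(w.sum(axis=0), 1e-9, None)
    if prev is None:
        return fused
    # Temporal integration: blend the fused frame with the running estimate.
    return alpha * prev + (1.0 - alpha) * fused
```

Weighting by confidence lets a sensor with a poor view of a joint contribute less, which is one way multiple viewpoints can improve on a single fronto-parallel sensor.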
8

Spectral and temporal integration of brief tones

Hoglund, Evelyn M. 23 August 2007 (has links)
No description available.
9

How we remember the emotional intensity of past musical experiences

Schäfer, Thomas, Zimmermann, Doreen, Sedlmeier, Peter 15 September 2014 (has links) (PDF)
Listening to music usually elicits emotions that can vary considerably in their intensity over the course of listening. Yet, after listening to a piece of music, people are easily able to evaluate the music's overall emotional intensity. There are two different hypotheses about how affective experiences are temporally processed and integrated: (1) all moments' intensities are integrated, resulting in an averaged value; (2) the overall evaluation is built from specific single moments, such as the moments of highest emotional intensity (peaks), the end, or a combination of these. Here we investigated what listeners do when building an overall evaluation of a musical experience. Participants listened to unknown songs and provided moment-to-moment ratings of experienced intensity of emotions. Subsequently, they evaluated the overall emotional intensity of each song. Results indicate that participants' evaluations were predominantly influenced by their average impression but that, in addition, the peaks and end emotional intensities contributed substantially. These results indicate that both types of processes play a role: All moments are integrated into an averaged value but single moments might be assigned a higher value in the calculation of this average.
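The two hypotheses contrasted in this abstract correspond to simple summary statistics over a moment-to-moment intensity trace. The helper below is a hypothetical illustration of those candidate predictors, not the study's analysis code.

```python
import numpy as np

def overall_intensity_predictors(trace):
    """Summary statistics for a moment-to-moment emotional-intensity trace.

    trace: sequence of intensity ratings collected while listening.
    Returns the candidate predictors of the overall evaluation.
    """
    trace = np.asarray(trace, dtype=float)
    return {
        "average": trace.mean(),               # hypothesis 1: integrate all moments
        "peak": trace.max(),                   # hypothesis 2: most intense moment
        "end": trace[-1],                      # hypothesis 2: final moment
        "peak_end": (trace.max() + trace[-1]) / 2.0,
    }
```

The finding reported above corresponds to the overall evaluation tracking "average" most strongly, with "peak" and "end" contributing additional weight.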
