341

Brain Mapping of the Latency Epochs in a McGurk Effect Paradigm in Music Performance and Visual Arts Majors

Nordstrom, Lauren Donelle 01 March 2015 (has links)
The McGurk effect is an illusion that occurs when an auditory /ba/ is combined with a visual /ga/. The two stimuli fuse together, which leads to the perception of /da/, a sound in between /ba/ and /ga/. The purpose of this study was to determine whether music performance and visual arts majors process mismatched auditory and visual stimuli, like the McGurk effect, differently. Nine syllable pairs were presented to 10 native English speakers (5 music performance majors and 5 visual arts majors between the ages of 18 and 28 years) in a four-alternative forced-choice response paradigm. Data from event-related potentials were recorded for each participant. Results demonstrate that there are differences in the electrophysiological responses to viewing the mismatched syllable pairs: the /ga/ phoneme produced more differences in the music performance group, while the /da/ phoneme produced more differences in the visual arts group. The McGurk effect is processed differently in the music performance majors and the visual arts majors; processing begins in the earliest latency epoch in the visual arts group but in the late latency epoch in the music performance group. These results imply that the music performance group has a more complex decoding system than the visual arts group. They may also suggest that the visual arts group is better able to integrate the visual and auditory information to resolve the conflict when mismatched signals are presented.
342

Speech Recognition with Linear and Non-linear Amplification in the Presence of Industrial Noise

Olson, Marcia Ann 10 July 1996 (has links)
In order to help reduce hearing loss, the Occupational Safety and Health Administration regulates noise levels in work environments. However, hearing aids are the primary rehabilitative service provided for individuals with an occupational hearing loss, and very little is being done to monitor hearing aid use in the work environment. Noise which may be safe to an unaided ear can be amplified to levels that are damaging to the ear when a hearing aid is being worn. However, it is necessary for some individuals to wear amplification in these noisy environments for safety reasons. As a consequence, it is important that these individuals be able to understand speech in the presence of industrial noise while wearing amplification. The purpose of this study was to determine if there is a significant difference in speech intelligibility between linear hearing aids and different types of non-linear hearing aids when they are used in the presence of industrial noise. Twenty-four normal hearing subjects were selected for this study. Each subject was asked to identify words in four CID W-22 lists which had been recorded through a linear hearing aid and two different non-linear hearing aids. Test results showed significantly better word recognition for the linear-in-quiet condition than for all other conditions. Significantly higher scores were obtained for the TILL condition than for the linear-in-noise and BILL conditions. These preliminary results suggest that an individual wearing amplification in a noisy work environment would benefit from a TILL circuit: it would provide better speech intelligibility in this type of environment and, therefore, a safer work environment for the hearing aid user.
343

A comparative study of the short-term auditory memory span and sequence of language/learning disabled children and normal children

McCausland, Kathleen M. 01 January 1978 (has links)
This investigation compared the auditory memory span and sequence of language/learning disabled children with that of normal children to determine if there was a difference between the two groups on short-term auditory memory, ordering of stimulus type difficulty and performance on subtests using various stimulus types. Fifteen LD subjects were matched with fifteen normal subjects for mental age as measured by the Peabody Picture Vocabulary Test. The Auditory Memory Test Battery (AMTB) was administered to each subject. The AMTB consists of five tape recorded subtests of recall for sentences, digits, related words, unrelated words, and nonsense words. Each subject responded verbally to the randomly presented subtests. This resulted in ten scores for each subject: a span score and sequence score for each of the five subtests, with a possible twenty-eight points for each subtest for both span and sequence.
344

The Effects Of Speech Motor Preparation On Auditory Perception

Unknown Date (has links)
Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed “motor efference copy” (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks. / acase@tulane.edu
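As a hedged illustration of how an auditory detection threshold can be tracked adaptively (the 1-up/1-down staircase below and the simulated listener are assumptions for demonstration, not the dissertation's psychophysical method):

```python
# Illustrative sketch (not the study's procedure): a simple 1-up/1-down
# adaptive staircase for estimating a pure-tone detection threshold.
# Step size, starting level, and the deterministic listener are assumptions.

def staircase_threshold(detects, start_db=40.0, step_db=2.0, n_trials=60):
    """Track the level where detection flips; return the mean of later reversals."""
    level = start_db
    reversals = []
    last_direction = 0  # +1 = level went up, -1 = level went down
    for _ in range(n_trials):
        heard = detects(level)
        direction = -1 if heard else +1  # heard -> make the next trial harder
        if last_direction and direction != last_direction:
            reversals.append(level)      # the track changed direction here
        last_direction = direction
        level += direction * step_db
    tail = reversals[len(reversals) // 2:]  # average later reversals only
    return sum(tail) / len(tail) if tail else level

# Simulated listener with a "true" threshold of 20 dB (deterministic for clarity)
estimate = staircase_threshold(lambda db: db >= 20.0)
```

The staircase converges to oscillating around the true threshold, so the reversal average lands near 20 dB; real psychophysics would use a stochastic listener and catch trials.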
345

Tones and vowels in Cantonese infant directed speech : hyperarticulation during the first 12 months of infancy

Xu, Nan, University of Western Sydney, College of Arts, MARCS Auditory Laboratories January 2008 (has links)
In speech, vowels and consonants are the two basic classes of sounds that combine to form lexically meaningful items in all languages. In tone languages, changes in pitch (tone differences) also make meaningful lexical distinctions in spoken words. Young infants appear to have no trouble perceiving speech sounds, and their production of sounds peculiar to their particular language environment proceeds relatively smoothly and rapidly compared with adults’ acquisition of foreign languages. One way of looking at how infants come to acquire the speech sounds of their first language is by examining the speech input they receive. The term infant-directed speech (IDS) has been coined to describe the special way adults and even older children speak to infants. IDS differs from adult-directed speech (ADS) in various acoustic/phonetic modifications, such as exaggerated prosody, increased pitch, and vowel hyperarticulation (Burnham, Kitamura, and Vollmer-Conna, 2002; Kuhl et al., 1997). The exaggerated prosody and increased pitch appear to be related to the expression of affect and gaining infants’ attention (Burnham, Kitamura, and Vollmer-Conna, 2002), whereas vowel hyperarticulation appears to be related to infants’ speech development for a number of reasons. Firstly, investigating how adults speak to foreigners, Uther, Knoll, and Burnham (2007) found that vowels are hyperarticulated in foreigner-directed speech as in IDS, while other acoustic modifications such as exaggerated prosody and increased pitch, related to affective and attentional factors, are not present in foreigner-directed speech.
Secondly, Liu, Kuhl, and Tsao (2003) found a positive correlation between vowel hyperarticulation and infants’ native speech perception: mothers who hyperarticulated their vowels more had infants who were better able to discriminate native consonant contrasts. While vowel hyperarticulation in IDS to 6-month-olds has been investigated in both tone languages such as Mandarin (Liu et al., 2003) and non-tone languages such as Russian, Swedish, American English (Kuhl et al., 1997), and Australian English (Burnham et al., 2002), no parallel studies have been conducted on the possibility of tone hyperarticulation in tone language IDS. If vowel hyperarticulation is related to infants’ language development, then tones in tone languages should also be hyperarticulated. The possibility of tone as well as vowel hyperarticulation in IDS of the tone language Cantonese, and the development of hyperarticulation across the first 12 months of infancy, were investigated here using a longitudinal sequential cohort design. Two groups of native Cantonese mothers were recorded speaking to their infants, the first group at 3, 6, and 9 months, and the second at 6, 9, and 12 months. The study had four main aims: (1) to investigate whether tone hyperarticulation occurs in IDS in a tone language, Cantonese; (2) to investigate whether vowel hyperarticulation occurs in IDS in Cantonese (IDS in this language had not yet been investigated); if (1) and (2) are the case, (3) to compare tone and vowel hyperarticulation; and (4) to chart the development of tone and vowel hyperarticulation across the infant’s first 12 months. Contrary to previous findings of vowel hyperarticulation in English, Russian, Swedish, and Mandarin IDS to 6-month-olds (Burnham et al., 2002; Kuhl et al., 1997), vowel hyperarticulation was not found for Cantonese IDS.
More detailed acoustic analysis examining different dimensions of the vowel space suggests that after the infant is 3 months old, mothers’ vowels begin to be hypoarticulated in IDS compared to ADS on the dimensions of back versus front and high versus low. This pattern of results is consistent with vowel perception studies which suggest that infants have already tuned into the native vowel categories by 4 to 6 months (Polka and Werker, 1994). Tone hyperarticulation, on the other hand, was indeed present at 3 months and increased to peak at 6 to 9 months before declining at 12 months. This pattern of tone hyperarticulation across the first year of infancy is consistent with infant language development, in which attenuation of perception of non-native tones has been found between 6 and 9 months (Mattock and Burnham, 2006). Moreover, detailed phonetic analysis revealed that while the level tones are more hyperarticulated than the contour tones, tones with similar onsets and offsets (i.e., the two rising tones) are actually hypoarticulated in IDS at 9 and 12 months, a time when infants have already tuned into native tones. Finally, results from a preliminary native speech discrimination study using the same infants provide some initial indication that mothers who hyperarticulated tones more also had infants who were better able to discriminate native Cantonese consonants. Together these results suggest that in Cantonese IDS vowels are underspecified whereas tones are consistently over-specified, particularly at 6 months when infants are tuning into native tones. Moreover, during this initial period of tone acquisition, only level tones are over-specified while tones with similar onsets and offsets are underspecified.
It seems likely that for infants in a Cantonese language environment, during the early stages of language acquisition, pitch information specified by level tones is sufficient for initial acquisition of information about the Cantonese tone space, and that information about vowels is not so essential at this time. These studies show that there is indeed tone hyperarticulation in IDS in tone languages, and that in order to make sense of the vowel hyperarticulation data in tone languages, it is important to investigate both vowels and tones in tone languages with complex tone systems such as Cantonese, instead of simply applying Anglocentric notions of vowel hyperarticulation. / Doctor of Philosophy (PhD)
346

The conscious brain : Empirical investigations of the neural correlates of perceptual awareness

Eriksson, Johan January 2007 (has links)
Although consciousness has been studied since ancient time, how the brain implements consciousness is still considered a great mystery by most. This thesis investigates the neural correlates of consciousness by measuring brain activity with functional magnetic resonance imaging (fMRI) while specific contents of consciousness are defined and maintained in various experimental settings. Study 1 showed that the brain works differently when creating a new conscious percept compared to when maintaining the same percept over time. Specifically, sensory and fronto-parietal regions were activated for both conditions but with different activation patterns within these regions. This distinction between creating and maintaining a conscious percept was further supported by Study 2, which in addition showed that there are both differences and similarities in how the brain works when defining a visual compared to an auditory percept. In particular, frontal cortex was commonly activated while posterior cortical activity was modality specific. Study 3 showed that task difficulty influenced the degree of frontal and parietal cortex involvement, such that fronto-parietal activity decreased as a function of ease of identification. This is interpreted as evidence of the non-necessity of these regions for conscious perception in situations where the stimuli are distinct and apparent. Based on these results a model is proposed where sensory regions interact with controlling regions to enable conscious perception. The amount and type of required interaction depend on stimuli and task characteristics, to the extent that higher-order cortical involvement may not be required at all for easily recognizable stimuli.
347

Vocal response times to acoustic stimuli in white whales and bottlenose dolphins

Blackwood, Diane Joyner 30 September 2004 (has links)
Response times have been used to explore cognitive and perceptual processes since 1850 (Donders, 1868). The technique has primarily been applied to humans, birds, and terrestrial mammals. Results from two studies are presented here that examine response times in bottlenose dolphins (Tursiops truncatus) and white whales (Delphinapterus leucas). One study concerned response times to stimuli well above the threshold of perceptibility of a stimulus, and the other concerned response times to stimuli near threshold. Two white whales (Delphinapterus leucas) and five Atlantic bottlenose dolphins (Tursiops truncatus) were presented stimuli well above threshold. The stimuli varied in type (tone versus pulse), amplitude, duration, and frequency. The average response time for bottlenose dolphins was 231.9 ms. The average response time for white whales was 584.1 ms. There was considerable variation between subjects within a species, but the difference between species was also found to be significant. In general, response times decreased with increasing stimulus amplitude. The effect of duration and frequency on response time was unclear. Two white whales (Delphinapterus leucas) and four Atlantic bottlenose dolphins (Tursiops truncatus) were given audiometric tests to determine masked hearing thresholds in open waters of San Diego Bay (Ridgway et al., 1997). Animals were tested at six frequencies over a range from 400 Hz to 30 kHz using pure tones. Hearing thresholds varied from 87.5 dB to 125.5 dB depending on the frequency, masking noise intensity and individual animal. At threshold, median response time across frequencies within each animal varied by about 150 ms. The two white whales responded significantly slower (∼670 msec, p<0.0001) than the four dolphins (∼410 msec). As in terrestrial animals, reaction time became shorter as stimulus amplitude increased (Wells, 1913; Stebbins, 1966). 
Across the two studies, the dolphins as a group were faster in the above-threshold study than in the near-threshold study. White whales had longer response times than bottlenose dolphins in both studies. Analysis of response time with an allometric relation based on weight shows that the difference in weight can explain a significant part of the difference in response time.
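The allometric point can be sketched numerically. Assuming a power law RT = a·W^b, two species means suffice to solve for the exponent exactly; the body masses below are hypothetical placeholders (the abstract does not report weights), though the response times echo the reported above-threshold means:

```python
import math

# Hedged sketch: fitting an allometric relation RT = a * W**b from two species
# means. The masses (kg) are made-up placeholders, NOT the dissertation's data;
# the response times (ms) are the above-threshold means quoted in the abstract.

def fit_power_law(points):
    """Solve RT = a * W**b exactly through two (weight, RT) points (log-log line)."""
    (w1, t1), (w2, t2) = points
    b = (math.log(t2) - math.log(t1)) / (math.log(w2) - math.log(w1))
    a = t1 / (w1 ** b)
    return a, b

a, b = fit_power_law([(200.0, 231.9),   # hypothetical dolphin mass, mean RT
                      (900.0, 584.1)])  # hypothetical white whale mass, mean RT
predicted_ms = a * 400.0 ** b           # interpolated RT for a 400 kg animal
```

With these placeholder masses the exponent comes out near 0.6, i.e., response time grows sublinearly with body mass, which is the shape of relation the abstract appeals to.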
349

Computational auditory saliency

Delmotte, Varinthira Duangudom 07 November 2012 (has links)
The objective of this dissertation research is to identify sounds that grab a listener's attention; sounds that draw a person's attention are considered salient. The focus here will be on investigating the role of saliency in the auditory attentional process. In order to identify these salient sounds, we have developed a computational auditory saliency model inspired by our understanding of the human auditory system and auditory perception. By identifying salient sounds we can obtain a better understanding of how sounds are processed by the auditory system, and in particular, the key features contributing to sound salience. Additionally, studying the salience of different auditory stimuli can lead to improvements in the performance of current computational models in several different areas, by making use of the information obtained about what stands out perceptually to observers in a particular scene. Auditory saliency also helps to rapidly sort the information present in a complex auditory scene. Since our resources are finite, not all information can be processed equally. We must, therefore, be able to quickly determine the importance of different objects in a scene. Additionally, an immediate response or decision may be required. In order to respond, the observer needs to know the key elements of the scene. The issue of saliency is closely related to many different areas, including scene analysis. The thesis provides a comprehensive look at auditory saliency. It explores the advantages and limitations of using auditory saliency models through different experiments and presents a general computational auditory saliency model that can be used for various applications.
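As a loose illustration of the kind of computation such models perform (this is not the dissertation's model; the center-surround scheme, window size, and toy "spectrogram" are all assumptions for demonstration):

```python
# Toy sketch of saliency in the spirit of spectrogram-based models: a bin is
# salient when its energy exceeds the local temporal surround. This is NOT the
# model developed in the dissertation, just a minimal illustration.

def saliency(spectrogram, surround=2):
    """Per-bin saliency = energy minus the mean over a local temporal window."""
    sal = []
    for channel in spectrogram:            # each channel: energy over time
        row = []
        for t, e in enumerate(channel):
            lo, hi = max(0, t - surround), min(len(channel), t + surround + 1)
            local_mean = sum(channel[lo:hi]) / (hi - lo)
            row.append(max(0.0, e - local_mean))  # only energy increases pop out
        sal.append(row)
    return sal

# A steady background with one abrupt onset in channel 0 at time index 3
spec = [[1, 1, 1, 9, 1, 1, 1],
        [1, 1, 1, 1, 1, 1, 1]]
sal = saliency(spec)
peak = max(range(len(sal[0])), key=sal[0].__getitem__)  # most salient time bin
```

The abrupt onset dominates the map while the steady channel contributes nothing, mirroring the intuition that salience tracks local contrast rather than absolute level.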
350

Data Density and Trend Reversals in Auditory Graphs: Effects on Point Estimation and Trend Identification Tasks

Nees, Michael A. 28 February 2007 (has links)
Auditory graphs (displays that represent graphical, quantitative information with sound) have the potential to make graphical representations of data more accessible to blind students and researchers as well as sighted people. No research to date, however, has systematically addressed the attributes of data that contribute to the complexity (the ease or difficulty of comprehension) of auditory graphs. A pair of studies examined the role of both data density (i.e., the number of discrete data points presented per second) and the number of trend reversals for both point estimation and trend identification tasks with auditory graphs. For the point estimation task, results showed main effects of both variables, with a larger effect attributable to performance decrements for graphs with more trend reversals. For the trend identification task, a large main effect was again observed for trend reversals, but an interaction suggested that the effect of the number of trend reversals was different across lower data densities (i.e., as density increased from 1 to 2 data points per second). Results are discussed in terms of data sonification applications and rhythmic theories of auditory pattern perception.
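The two manipulated attributes can be made concrete in code: data density fixes the tone onset times, and trend reversals are direction changes in the series. The pitch range below is an assumption for illustration, not the display parameters used in the studies:

```python
# Hedged sketch of an auditory graph's two complexity attributes. The linear
# value-to-frequency mapping and its 200-1000 Hz range are assumptions.

def to_tone_sequence(data, points_per_second, fmin=200.0, fmax=1000.0):
    """Map each data value in [0, 1] to (onset_seconds, frequency_hz)."""
    dt = 1.0 / points_per_second           # data density sets tone spacing
    return [(i * dt, fmin + v * (fmax - fmin)) for i, v in enumerate(data)]

def count_trend_reversals(data):
    """Number of times the series switches between rising and falling."""
    dirs = [1 if b > a else -1 for a, b in zip(data, data[1:]) if b != a]
    return sum(1 for d1, d2 in zip(dirs, dirs[1:]) if d1 != d2)

series = [0.1, 0.4, 0.8, 0.5, 0.2, 0.6, 0.9]      # rises, falls, rises again
tones = to_tone_sequence(series, points_per_second=2)  # density = 2 pts/sec
reversals = count_trend_reversals(series)
```

Doubling `points_per_second` halves the spacing between tones without changing their pitches, which is why density and reversal count can be manipulated independently in such studies.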
