  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

The education of attention to information specifying loss in altitude /

Hettinger, Lawrence James January 1987 (has links)
No description available.
322

Provisional perspective on the self with a summary of pertinent research, 1959-1969 /

Uhlenberg, Donald Merle January 1971 (has links)
No description available.
323

Study of a computer-based rehabilitation tool for children with visuo-spatial deficits of neurological origin

Balançon, Caroline January 2009 (has links) (PDF)
Speech-language pathology thesis (mémoire d'orthophonie) : Medicine : Nancy 1 : 2009. / Title from the title screen. Includes bibliographical references.
324

Facilitation or interference? The influence of visual cues on the accuracy and control of visually-guided and memory-dependent reaches /

Krigolson, Olave Edouard. January 2003 (has links)
Thesis (M.S.)--Indiana University, 2003. / Includes bibliographical references (leaves 56-64). Also available online (PDF file) by a subscription to the set or by purchasing the individual file.
326

Auditory-visual integration of temporal relations in infants

Humphrey, Gary Keith January 1979 (has links)
Three experiments examined auditory-visual integration of temporal relations by infants. In the first experiment infants of 3, 6 and 10 months of age were placed midway between two flashing visual displays. Tones, temporally synchronized to one of the visual displays, emanated from concealed speakers placed midway between the visual displays directly in front of the infants. The visual displays, and corresponding tones differed in temporal rate by a factor of four. No evidence was found for differential looking to the sound-specified visual pattern in any of the three age levels tested. The 3-month-olds showed a strong right-looking bias regardless of visual pattern or temporal rate of the tone, while the 10-month-olds preferred to look at the fast visual pattern regardless of position or tone rate. Both of these biases impaired the effectiveness of the simultaneous presentation paradigm to detect differential looking related to auditory-visual synchrony. Experiments II and III used an habituation methodology which eliminated any effects of position and rate bias. Only 4-month-old infants were tested. In each experiment, one group of infants was first presented with temporally synchronous auditory and visual signals during habituation trials and then nonsynchronous signals during recovery trials. Two other groups of infants, one in each experiment, received the opposite sequence. In Experiment II the auditory and visual signals were spatially congruous, but they were separated by 90° in Experiment III. Since the pulse rate of the visual stimuli was changed for the nonsynchronous trials, a control group was tested which received only the light during habituation and recovery trials. Both groups initially presented with synchronous signals showed habituation and recovery. Neither group presented with nonsynchronous stimuli during habituation trials demonstrated recovery and only the group with the spatially separated sources habituated. 
The results suggest that 4-month-old infants are able to coordinate the temporal relations between auditory and visual signals. / Faculty of Arts / Department of Psychology / Graduate
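The habituation-recovery logic used in Experiments II and III (habituate looking to one signal pairing, then test for recovery to a changed pairing) rests on a decline-to-criterion rule for looking times. The sketch below is a generic illustration of such a rule, assuming a common 50%-of-baseline criterion over three-trial windows; the specific windows, thresholds, and data are illustrative, not taken from the thesis.

```python
def habituated(looking_times, window=3, criterion=0.5):
    """Return True once the mean looking time over the last `window`
    trials falls below `criterion` times the mean of the first `window`
    trials -- a standard decline-to-criterion habituation rule."""
    if len(looking_times) < 2 * window:
        return False  # not enough trials to compare baseline vs. recent
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

# Hypothetical looking times (seconds) declining across trials
trials = [12.0, 11.0, 10.5, 7.0, 5.0, 4.0]
reached_criterion = habituated(trials)
```

A recovery test would then compare post-change looking times against the final habituation trials; an increase indicates the infant detected the change.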
327

Categorical Perception of Species in Infancy

White, Hannah B. 01 January 2016 (has links)
Although there is a wealth of knowledge on categorization in infancy, there are still many unanswered questions about the nature of category representation in infancy. For example, it is as yet unclear whether categories in infancy have well-defined boundaries or what knowledge about species categories young infants have before entering the lab. Using a morphing technique, we linearly altered the proportion of cat versus dog in images and observed how infants reacted to contrasts between pairs of images that either did or did not cross over the categorical boundary. This was done while equating between-category and within-category similarity. Results indicate that infants’ pre-existing categories of cats and dogs are discrete and mutually exclusive. Experiment 2 found that inversion caused a disruption in processing by 6.5- but not 3.5-month-old infants, indicating a developmental change in category representation. These findings demonstrate a propensity to dichotomize early in life that could have implications for social categorizations, such as race and gender. Furthermore, this work extends previous knowledge of infant categorical perception by demonstrating a priori knowledge of familiar species categories and the boundaries between them.
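The morphing manipulation described above amounts to a pixel-wise linear blend between two images at varying proportions. A minimal sketch, assuming simple grayscale images represented as nested lists (the actual stimuli and morphing software are not specified in the abstract):

```python
def morph(image_a, image_b, proportion_b):
    """Pixel-wise linear blend of two equal-sized grayscale images:
    proportion_b = 0.0 yields pure A, 1.0 yields pure B."""
    return [
        [(1.0 - proportion_b) * a + proportion_b * b
         for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(image_a, image_b)
    ]

# Stand-in 2x2 "images": an all-zero cat and an all-one dog
cat = [[0.0, 0.0], [0.0, 0.0]]
dog = [[1.0, 1.0], [1.0, 1.0]]

# An 11-step continuum from 100% cat to 100% dog, in 10% increments
continuum = [morph(cat, dog, step / 10) for step in range(11)]
```

Pairs drawn from such a continuum can be equated for physical distance (equal morph-step separation) while either crossing or not crossing the category boundary, which is the contrast the study relies on.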
328

Perceptual aftereffects reveal dissociable adaptive coding of faces of different races and sexes

Jaquet, Emma January 2008 (has links)
[Truncated abstract] Recent studies have provided evidence that face-coding mechanisms reference a norm or average face (Leopold, O'Toole, Vetter & Blanz, 2001; Rhodes & Jeffery, 2006). The central aim of this thesis was to establish whether distinct norms and dissociable neural mechanisms code faces of different race and sex categories. Chapter 1 provides a brief introduction to norm-based coding of faces, and reviews evidence for the existence of distinct norms for different races and sexes. Chapter 1 then introduces adaptation as a tool for investigating these ideas. Chapter 2 presents two adaptation studies that examined how faces of different races are coded. The aim of these studies was to determine whether dissociable neural mechanisms (or distinct face norms) code faces of different races. Chinese and Caucasian participants rated the normality of Caucasian and Chinese test faces, before and after adaptation to distorted faces of one race (e.g., 'contracted' Chinese faces; Experiment 1) or distorted faces of both races (e.g., 'contracted' Chinese faces and 'expanded' Caucasian faces; Experiment 2). Following adaptation to faces of one race, there were changes in perceived normality for faces of both races (i.e., perceptual aftereffects), indicating that common neural mechanisms code Chinese and Caucasian faces. However, aftereffects were significantly smaller in faces of the unadapted race, suggesting some sensitivity to the race of faces. This sensitivity was also evident in Experiment 2. ... Some dissociability was also found in the coding of faces of different sexes. In Experiments 2 and 3, participants adapted to oppositely distorted faces of both sexes. Weak sex-selective aftereffects were found. Taken together, the findings suggest that male and female faces are coded by dissociable but not completely distinct neural populations.
Chapter 4 examined whether the aftereffects reported for faces of different races or sexes reflected the adaptation of high-level neural mechanisms tuned to the social category information in faces, or earlier coding mechanisms tuned to simple physical differences between face groups. Chinese and Caucasian participants adapted to oppositely distorted face sets that were the same distance apart on a morph continuum. The face sets were either from different race categories (e.g., contracted Chinese faces and expanded Caucasian faces), or from the same race category (e.g., contracted Chinese faces and expanded caricatured Chinese faces). Larger opposite aftereffects were found when face sets were from different race categories than when they were from the same race category, suggesting that oppositely adapted neural mechanisms are tuned to social category differences rather than simple physical differences in faces. Together, these studies shed new light on how we code faces from different face categories. Specifically, the findings indicate that faces of different races and sexes are coded by both common and race- or sex-selective neural mechanisms. In addition, the findings are consistent with the possibility that race- and sex-selective norms and dimensions are used to code faces in face space. The implications of these findings and possible avenues for future research are discussed.
329

Vocalisations with a better view : hyperarticulation augments the auditory-visual advantage for the detection of speech in noise

Lees, Nicole C., University of Western Sydney, College of Arts January 2007 (has links)
Recent studies have shown that there is a visual influence early in speech processing - visual speech enhances the ability to detect auditory speech in noise. However, identifying exactly how visual speech interacts with auditory processing at such an early stage has been challenging, because this so-called AV speech detection advantage is both highly related to a specific lower-order, signal-based, optic-acoustic relationship between the second formant amplitude and the area of the mouth (F2/Mouth-area), and mediated by higher-order, information-based factors. Previous investigations have either maximised or minimised information-based factors, or minimised signal-based factors, in order to tease out the relative importance of these sources of the advantage, but they have not yet been successful in this endeavour. Maximising signal-based factors had not previously been explored. This avenue was explored in this thesis by manipulating speaking style: hyperarticulated speech was used to maximise signal-based factors, and hypoarticulated speech to minimise them, to examine whether the AV speech detection advantage is modified by these means, and to provide a clearer idea of the primary source of visual influence in the AV detection advantage. Two sets of six studies were conducted. In the first set, three recorded speech styles, hyperarticulated, normal, and hypoarticulated, were extensively analysed in physical (optic and acoustic) and perceptual (visual and auditory) dimensions ahead of stimulus selection for the second set of studies. The analyses indicated that the three styles comprise distinctive categories on the Hyper-Hypo continuum of articulatory effort (Lindblom, 1990). Most relevantly, both optically and visually, hyperarticulated speech was more informative, and hypoarticulated speech less informative, than normal speech with regard to signal-based movement factors.
However, the F2/Mouth-area correlation was similarly strong for all speaking styles, thus allowing examination of signal-based, visual informativeness on AV speech detection with optic-acoustic association controlled. In the second set of studies, six Detection Experiments incorporating the three speaking styles were designed to examine whether, and if so why, more visually informative (hyperarticulated) speech augmented, and less visually informative (hypoarticulated) speech attenuated, the AV detection advantage relative to normal speech, and to examine visual influence when auditory speech was absent. Detection Experiment 1 used a two-interval, two-alternative (first or second interval, 2I2AFC) detection task, and indicated that hyperarticulation provided an AV detection advantage greater than for normal and hypoarticulated speech, with less of an advantage for hypoarticulated than for normal speech. Detection Experiment 2 used a single-interval, yes-no detection task to assess responses in signal-absent conditions independently of signal-present conditions, as a means of addressing participants’ reports that speech was heard when it was not presented in the 2I2AFC task. Hyperarticulation resulted in an AV detection advantage, and for all speaking styles there was a consistent response bias to indicate speech was present in signal-absent conditions. To examine whether the AV detection advantage for hyperarticulation was due to visual, auditory or auditory-visual factors, Detection Experiments 3 and 4 used mismatching AV speaking style combinations (AnormVhyper, AnormVhypo, AhyperVnorm, AhypoVnorm) that were onset-matched or time-aligned, respectively. The results indicated that higher rates of mouth movement can be sufficient for the detection advantage with weak optic-acoustic associations, but, in circumstances where these associations are low, even high rates of movement have little impact on augmenting detection in noise.
Furthermore, in Detection Experiment 5, in which visual stimuli consisted only of the mouth movements extracted from the three styles, there was no AV detection advantage, and it seems that this is so because extra-oral information is required, perhaps to provide a frame of reference that improves the availability of mouth movement to the perceiver. Detection Experiment 6 used a new 2I-4AFC task and the measures of false detections and response bias to identify whether visual influence in signal-absent conditions is due to response bias or an illusion of hearing speech in noise (termed here the Speech in Noise, SiN, Illusion). In the event, the SiN illusion occurred for both the hyperarticulated and the normal styles, styles with reasonable amounts of movement change. For normal speech, the responses in signal-absent conditions were due only to the illusion of hearing speech in noise, whereas for hypoarticulated speech such responses were due only to response bias. For hyperarticulated speech there is evidence for the presence of both types of visual influence in signal-absent conditions. It seems to be the case that there is more doubt with regard to the presence of auditory speech for non-normal speech styles. An explanation of past and present results is offered within a new framework, the Dynamic Bimodal Accumulation Theory (DBAT). This is developed in this thesis to address the limitations of, and conflicts between, previous theoretical positions. DBAT suggests a bottom-up influence of visual speech on the processing of auditory speech; specifically, it is proposed that the rate of change of visual movements guides auditory attention rhythms ‘on-line’ at corresponding rates, which allows selected samples of the auditory stream to be given prominence. Any patterns contained within these samples then emerge from the course of auditory integration processes.
By this account, there are three important elements of visual speech necessary for enhanced detection of speech in noise. First and foremost, when speech is present, visual movement information must be available (as opposed to hypoarticulated and synthetic speech). Then the rate of change and optic-acoustic relatedness also have an impact (as in Detection Experiments 3 and 4). When speech is absent, visual information has an influence; and the SiN illusion (Detection Experiment 6) can be explained as a perceptual modulation of a noise stimulus by visually-driven rhythmic attention. In sum, hyperarticulation augments the AV speech detection advantage, and, whenever speech is perceived in noisy conditions, there is either response bias to perceive speech or a SiN illusion, or both. DBAT provides a detailed description of these results, with wider-ranging explanatory power than previous theoretical accounts. Predictions are put forward for examination of the predictive power of DBAT in future studies. / Doctor of Philosophy (PhD)
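The yes-no detection task described above separates detection sensitivity from response bias. A standard signal-detection computation of sensitivity (d') and criterion (c) from hit and false-alarm rates can sketch this separation; this is the textbook formulation, not the thesis's own analysis code, and the rates below are hypothetical.

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, false_alarm_rate):
    """Sensitivity and bias for a yes-no detection task:
    d' = z(H) - z(F);  c = -(z(H) + z(F)) / 2.
    Negative c indicates a liberal bias toward responding "speech present"."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

# Many "yes" responses in signal-absent trials (a high false-alarm rate)
# show up as a negative criterion even when sensitivity is modest
d, c = dprime_and_bias(0.80, 0.40)
```

On these illustrative rates, d' is about 1.09 and c is about -0.29, i.e., a liberal bias of the kind the abstract reports as a consistent tendency to indicate speech in signal-absent conditions.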
330

The effect of learning on pitch and speech perception : influencing perception of Shepard tones and McGurk syllables using classical and operant conditioning principles

Stevanovic, Bettina, University of Western Sydney, College of Arts, School of Psychology January 2007 (has links)
This thesis is concerned with describing and experimentally investigating the nature of perceptual learning. Ecological psychology defines perceptual learning as a process of educating attention to structural properties of stimuli (i.e., invariants) that specify meaning (i.e., affordances) to the perceiver. Although such a definition comprehensively describes the question of what humans learn to perceive, it does not address the question of how learning occurs. It is proposed in this thesis that the principles of classical and operant conditioning can be used to strengthen and expand the ecological account of perceptual learning. The perceptual learning of affordances is described in terms of learning that a stimulus is associated with another stimulus (classical conditioning), and in terms of learning that interacting with a stimulus is associated with certain consequences (operant conditioning). Empirical work in this thesis investigated the effect of conditioning on pitch and speech perception. Experiments 1, 2, and 3 were designed to modify pitch perception in Shepard tones via tone-colour associative training. During training, Shepard tones were paired with coloured circles such that the colour of the circles could be predicted by either the F0 (pitch) or by an F0-irrelevant auditory invariant. Participants were required to identify the colour of the circles that was associated with the tones and they received corrective feedback. Hypotheses were based on the assumption that F0-relevant/F0-irrelevant conditioning would increase/decrease the accuracy of pitch perception in Shepard tones. Experiment 1 investigated the difference between F0-relevant and F0-irrelevant conditioning in a between-subjects design, and found that pitch perception in the two conditions did not differ. Experiments 2 and 3 investigated the effect of F0-relevant and F0-irrelevant conditioning (respectively) on pitch perception using a within-subjects (pre-test vs. post-test) design.
It was found that the accuracy of pitch perception increased after F0-relevant conditioning, and was unaffected by F0-irrelevant conditioning. The differential trends observed in Experiments 2 and 3 suggest that conditioning played some role in influencing pitch perception. However, the question of whether the observed trends were due to the facilitatory effect of F0-relevant conditioning or the inhibitory effect of F0-irrelevant conditioning warrants future investigation. Experiments 4, 5, and 6 were designed to modify the perception of McGurk syllables (i.e., auditory /b/ paired with visual /g/) via consonant-pitch associative training. During training, participants were repeatedly presented with /b/, /d/, and /g/ consonants in falling, flat, and rising pitch contours, respectively. Pitch contour was paired with either the auditory signal (Experiments 4 and 5) or the visual signal (Experiment 6) of the consonant. Participants were required to identify the stop consonants and they received corrective feedback. The perception of McGurk stimuli was tested before and after training by asking participants to identify the stop consonant in each stimulus as /b/ or /d/ or /g/. It was hypothesized that conditioning would increase (1) /b/ responses more in the falling than in the flat/rising contour conditions, (2) /d/ responses more in the flat than in the falling/rising contour conditions, and (3) /g/ responses more in the rising than in the falling/flat contour conditions. Support for the hypotheses was obtained in Experiments 5 and 6, but only in one response category (i.e., the /b/ and /g/ response categories, respectively). It is suggested that the observed conditioning effect, though subtle, could be enhanced by increasing the salience of pitch contour and by reducing the clarity of the auditory/visual invariants that specify consonants. / Doctor of Philosophy (PhD)
