  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Simultaneity constancy: unifying the senses across time

Harrar, Vanessa. January 2006 (has links)
Thesis (M.A.)--York University, 2006. Graduate Programme in Psychology. / Typescript. Includes bibliographical references (leaves 70-73). Also available on the Internet. / Mode of access: via web browser by entering the following URL: http://proquest.umi.com/pqdweb?index=0&did=1240699451&SrchMode=1&sid=15&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1195058296&clientId=5220
22

Synaesthesia: an essay in philosophical psychology

Gray, Richard Unknown Date (has links)
Thesis (Ph.D.)--University of Edinburgh, 2001.
23

The Role of Contingency and Gaze Direction in the Emergence of Social Referencing

Molina, Mariana V 07 November 2011 (has links)
The current study assessed the importance of infant detection of contingency and head and eye gaze direction in the emergence of social referencing. Five- to six-month-old infants’ detection of affect-object relations and subsequent manual preferences for objects paired with positive expressions were assessed. In particular, the role of contingency between toys’ movements and an actress’s emotional expressions as well as the role of gaze direction toward the toys’ location were examined. Infants were habituated to alternating films of two toys each paired with an actress’s affective expression (happy and fearful) under contingent or noncontingent and gaze congruent or gaze incongruent conditions. Results indicated that gaze congruence and contingency between toys’ movements and a person’s affective expressions were important for infant perception of affect-object relations. Furthermore, infant perception of the relation between affective expressions and toys translated to their manual preferences for the 3-dimensional toys. Infants who received contingent affective responses to the movements of the toys spent more time touching the toy that was previously paired with the positive expression. These findings demonstrate the role of contingency and gaze direction in the emergence of social referencing in the first half year of life.
24

7- and 12-Month-Olds' Intermodal Recognition of Affect: 7-Month-Olds are "Smarter" than 12-Month-Olds

Whiteley, Mark Oborn 30 June 2011 (has links) (PDF)
Research has shown that by 7 months of age infants demonstrate recognition of emotion by successfully matching faces and voices based on affect in an intermodal matching procedure. It is often assumed that once an ability is present the development of that ability has "ceased." Therefore, no research has examined if and how the ability to match faces and voices based on affect develops after the first 7 months. This study examined how the ability to match faces and voices based on affect changes from 7 to 12 months. Analyses of infants' proportion of total looking time (PTLT) showed that, consistent with previous research, 7-month-old infants looked significantly longer at the affectively congruent facial expression. However, 12-month-olds showed no matching of faces and voices. Further analyses showed that 7-month-olds also increased their looking to facial expressions while being presented with the affectively congruent vocal expression. Once again, 12-month-olds failed to show significant matching. That 7-month-olds were able to demonstrate matching while 12-month-olds failed to do so is possibly a result of 12-month-olds attending to other information. More research is needed to better understand how infants' recognition of affect and overall perceptual abilities change as they develop.
25

Modified postnatal social experience alters intersensory development of bobwhite quail chicks

Columbus, Rebecca F. 18 November 2008 (has links)
Recent studies have begun to explore the features of perinatal experience which facilitate infants’ abilities to integrate information from the various sensory modalities. The present study utilized a precocial avian infant, the bobwhite quail (Colinus virginianus), to explore 1) what types of postnatal social experience young chicks require to successfully pair sights and sounds and 2) when these experiences need to occur to maintain species-typical intersensory development. Specifically, chicks in this study were reared in one of four conditions: with normal siblings, with altered tactile experience, with altered auditory experience, or with altered visual experience. Findings revealed that altered tactile, auditory, and visual experience presented throughout the first 72 hrs of postnatal development delays chicks’ ability to integrate maternal auditory and visual information at 72 hrs of age, a response reliably seen in unmanipulated chicks. Furthermore, results showed that altered sensory experience in any modality presented during the first 36 hrs of postnatal development delays intersensory responsiveness. Altered tactile or auditory sensory information presented during the last 36 hrs of postnatal development also disrupted normal perceptual development, while altered visual information presented during the last 36 hrs of postnatal development failed to disrupt species-typical responsiveness. These findings suggest that normal sensory experience derived from social interaction is important for normal species-typical development. / Master of Science
26

Intersensory Transfer of a Learned Shape Discrimination

Taylor, Ronald D. 08 1900 (has links)
Intersensory transfer of training was systematically investigated for visual-to-tactual and tactual-to-visual situations. College students were trained in one modality on a successive-shape-discrimination task, then transferred to the opposite modality to perform a related-shape-discrimination task. The investigation showed successful transfer in both directions. Transfer from vision to touch was specific to the situation wherein all discriminanda were exactly the same in the two tasks. In contrast, transfer from touch to vision appeared to be a function of the subjects' ability to retain some type of schematic representation of the primary object as a mediational device to facilitate visual discrimination between the primary object and one of a slightly different shape.
27

Learning to match faces and voices

Unknown Date (has links)
This study examines whether forming a single identity is crucial to learning to bind faces and voices, or whether people are equally able to do so without tying this information to an identity. To test this, individuals learned paired faces and voices in one of three conditions: True Voice, Gender Matched, or Gender Mismatched. Performance was measured in a training phase as well as a test phase, and results show that participants learned more quickly and had higher overall performance in the True Voice and Gender Matched conditions. During the test phase, performance was almost at chance in the Gender Mismatched condition, which may mean that learning in the training phase was simply memorization of the pairings for this condition. Results support the hypothesis that learning to bind faces and voices is a process that involves forming a supramodal identity from multisensory learning. / by Meredith Davidson. / Thesis (M.A.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
28

Infants' perception of synesthetic-like multisensory relations

Unknown Date (has links)
Studies have shown that human infants can integrate the multisensory attributes of their world and, thus, have coherent perceptual experiences. Multisensory attributes can either specify non-arbitrary (e.g., amodal stimulus/event properties and typical relations) or arbitrary properties (e.g., visuospatial height and pitch). The goal of the current study was to expand on Walker et al.'s (2010) finding that 4-month-old infants looked longer at rising/falling objects when accompanied by rising/falling pitch than when accompanied by falling/rising pitch. We did so by conducting two experiments. In Experiment 1, our procedure matched Walker et al.'s (2010) single screen presentation while in Experiment 2 we used a multisensory paired-preference procedure. Additionally, we examined infants' responsiveness to these synesthetic-like events at multiple ages throughout development (four, six, and 12 months of age). ... In sum, our findings indicate that the ability to match changing visuospatial height with rising/falling pitch does not emerge until the end of the first year of life and throw into doubt Walker et al.'s (2010) claim that 4-month-old infants perceive audiovisual synesthetic relations in a manner similar to adults. / by Nicholas Minar. / Thesis (M.A.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
29

Spatiotemporal brain dynamics of the resting state

Unknown Date (has links)
Traditionally, brain function is studied through measuring physiological responses in controlled sensory, motor, and cognitive paradigms. However, even at rest, in the absence of overt goal-directed behavior, collections of cortical regions consistently show temporally coherent activity. In humans, these resting state networks have been shown to greatly overlap with functional architectures present during consciously directed activity, which motivates the interpretation of rest activity as day dreaming, free association, stream of consciousness, and inner rehearsal. In monkeys, though, it has been shown that similar coherent fluctuations are present during deep anesthesia, when there is no consciousness. These coherent fluctuations have also been characterized on multiple temporal scales, ranging from the fast frequency regimes, 1-100 Hz, commonly observed in EEG and MEG recordings, to the ultra-slow regimes, < 0.1 Hz, observed in the Blood Oxygen Level Dependent (BOLD) signal of functional magnetic resonance imaging (fMRI). However, the mechanism for their genesis and the origin of the ultra-slow frequency oscillations has not been well understood. Here, we show that comparable resting state networks emerge from a stability analysis of the network dynamics using biologically realistic primate brain connectivity, although anatomical information alone does not identify the network. We specifically demonstrate that noise and time delays via propagation along connecting fibres are essential for the emergence of the coherent fluctuations of the default network. The combination of anatomical structure and time delays creates a spacetime structure in which the neural noise enables the brain to explore various functional configurations representing its dynamic repertoire.
/ Using a simplified network model comprised of 3 nodes governed by the dynamics of FitzHugh-Nagumo (FHN) oscillators, we systematically study the role of time delay and coupling strength in the generation of the slow coherent fluctuations. We find that these fluctuations in the BOLD signal are significantly correlated with the level of neural synchrony, indicating that transient interareal synchronizations are the mechanism causing the emergence of the ultra-slow coherent fluctuations in the BOLD signal. / by Young-Ah Rho. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
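The kind of delay-coupled, noise-driven FHN network this abstract describes can be sketched in a few lines. All parameter values, the all-to-all coupling matrix, and the single shared propagation delay below are illustrative assumptions for the sketch, not the values used in the thesis.

```python
import numpy as np

# Sketch: 3 delay-coupled FitzHugh-Nagumo oscillators with additive noise,
# integrated with Euler-Maruyama. Parameters (a, b, tau, coupling c, delay,
# noise sigma) are assumed for illustration only.
rng = np.random.default_rng(0)

n, dt, steps = 3, 0.01, 20000
a, b, tau = 0.7, 0.8, 12.5           # classic excitable-regime FHN parameters
c, sigma = 0.1, 0.02                 # coupling strength and noise amplitude
delay_steps = int(5.0 / dt)          # one shared 5-time-unit propagation delay
A = np.ones((n, n)) - np.eye(n)      # all-to-all coupling, no self-connections

v = np.zeros((steps, n))             # fast (membrane-like) variable, full history
w = np.zeros(n)                      # slow recovery variable
v[0] = rng.normal(0.0, 0.1, n)

for t in range(1, steps):
    v_del = v[max(t - delay_steps, 0)]    # delayed states of the other nodes
    inp = c * (A @ v_del)                 # time-delayed network input
    dv = v[t - 1] - v[t - 1] ** 3 / 3 - w + inp
    dw = (v[t - 1] + a - b * w) / tau
    v[t] = v[t - 1] + dt * dv + sigma * np.sqrt(dt) * rng.normal(size=n)
    w = w + dt * dw

# Interareal synchrony over the second half of the run (after transients):
sync = np.corrcoef(v[steps // 2:].T)      # 3x3 pairwise correlation matrix
```

A sliding-window version of the final correlation step would expose the *transient* synchronizations the abstract attributes the ultra-slow BOLD fluctuations to; the long-run correlation here is just the simplest summary.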
30

Exploiting audiovisual attention for visual coding

Unknown Date (has links)
Perceptual video coding has been a promising area in recent years. Increases in compression ratios have been reported by applying foveated video coding techniques, where the region of interest (ROI) is selected using a computational attention model. However, most approaches to perceptual video coding use only visual features, ignoring the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect our visual perception. In this work, we validate some of those physiological tests using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting different experiments, we observed that, in general, the reaction time to detect video artifacts was higher when the video was presented with audio information. We observed that emotional information in audio guides human attention to particular ROIs. We also observed that sound frequency changes spatial frequency perception in still images. / by Freddy Torres. / Thesis (M.S.C.S.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
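The foveated-coding idea the abstract builds on can be sketched as a per-pixel quality weight that decays with distance from an attention-selected ROI; a codec could then scale its quantization step inversely with this weight. The Gaussian falloff, the frame size, and the ROI location below are illustrative assumptions, not the attention model or codec used in the thesis.

```python
import numpy as np

# Sketch: a foveation weight map for perceptual coding. Weight is 1.0 at the
# ROI center and decays with normalized distance; higher weight = finer
# quantization (more bits), lower weight = coarser quantization.
def foveation_weights(h, w, roi_y, roi_x, sigma=0.25):
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = ((ys - roi_y) / h) ** 2 + ((xs - roi_x) / w) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# QCIF-sized frame with a central ROI (both choices are hypothetical):
weights = foveation_weights(144, 176, 72, 88)
```

In an audiovisual extension of the kind the thesis argues for, the ROI coordinates would come from an attention model driven by both the video and the accompanying audio, rather than from visual saliency alone.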