71

Some information processing strategies involved in face recognition

Walker-Smith, Gail Josephine January 1982 (has links)
No description available.
72

Representations of spatial location in language processing

Apel, Jens January 2010 (has links)
The production or comprehension of linguistic information is often not an isolated task decoupled from the visual environment. Rather, people refer to objects or listen to other people describing objects around them. Previous studies have shown that in such situations people either fixate these objects, often multiple times (Cooper, 1974), or they attend to the objects much longer than is required for mere identification (Meyer, Sleiderink, & Levelt, 1998). Most interestingly, during comprehension people also attend to the location of objects even when those objects were removed (Altmann, 2004). The main focus of this thesis was to investigate the role of the spatial location of objects during language processing. The first part of the thesis tested whether attention to objects' former locations facilitates language production and comprehension processes (Experiments 1-5). In two initial eye-tracking experiments, participants were instructed to name objects that either changed their positions (Experiment 1) or were withdrawn from the computer screen (Experiment 2) during language production. Production was impaired when speakers did not attend to the original position of the objects. Most interestingly, fixating an empty region in which an object was located resulted in faster articulation and initiation times. During the language comprehension tasks, participants were instructed to evaluate facts presented by talking heads appearing in different positions on the computer screen. During evaluation, the talking heads changed position (Experiment 3) or were withdrawn from the screen (Experiments 4-5). People showed a strong tendency to gaze at the centre of the screen and only moved towards the head's former locations if the screen was empty and if evaluation was not preceded by an intervening task as tested in Experiment 5. Fixating the former location resulted in faster response time but not in better accuracy of evaluation. The second part of this thesis investigated the role of spatial location representations in reading (Experiments 6-7). Specifically, I examined to what extent people reading garden-path sentences regress to specific target words in order to reanalyse the sentences. The results of two eye-tracking experiments showed that readers do not target very precisely. A spatial representation is used, but it appears to be fairly coarse (i.e., only represents whether information is to the left or to the right of fixation). The findings from this thesis give us a clearer understanding of the influence of spatial location information on language processing. In language production particularly, it appears that spatial location is an integral part of the cognitive model and strongly connected with linguistic and visual representations.
73

The role of facial cues to body size on attractiveness and perceived leadership ability

Re, Daniel E. January 2013 (has links)
Facial appearance has a strong effect on leadership selection. Ratings of perceived leadership ability from facial images have a pronounced influence on leadership selection in politics, from low-level municipal elections to the federal elections of the most powerful countries in the world. Furthermore, ratings of leadership ability from facial images of business leaders correlate with leadership performance as measured by profits earned. Two elements of facial appearance that have reliable effects on perceived leadership ability are perceived dominance and attractiveness. These cues have been predictive of leadership choices, both experimentally and in the real world. Chapters 1 and 2 review research on face components that affect perceived dominance and attractiveness. Chapter 3 discusses how perceived dominance and attractiveness influence perception of leadership ability. Two characteristics that affect both perceived dominance and attractiveness are height and weight. Chapters 4-9 present empirical studies on two recently discovered facial parameters: perceived height (how tall someone appears from their face) and facial adiposity (a reliable proxy of body mass index that influences perceived weight). Chapters 4 and 5 demonstrate that these facial parameters alter facial attractiveness. Chapters 6, 7, and 8 examine how perceived height and facial adiposity influence perceived leadership ability. Chapter 9 examines how perceived height alters leadership perception in war and peace contexts. Chapter 10 summarises the empirical research reported in the thesis and draws conclusions from the findings. Chapter 10 also lists proposals for future research that could further enhance our knowledge of how facial cues to perceived body size influence democratic leadership selection.
74

Object based attention in visual word processing

Revie, Gavin F. January 2015 (has links)
This thesis focusses on whether words are treated like visual objects by the human attentional system. Previous research has shown an attentional phenomenon that is associated specifically with objects: this is known as “object based attention” (e.g. Egly, Driver & Rafal, 1994). This is where drawing a participant’s attention (cuing) to any part of a visual object facilitates target detection at non-cued locations within that object. That is, the cue elevates visual attention across the whole object. The primary objective of this thesis was to demonstrate this effect using words instead of objects. The main finding of this thesis is that this effect can indeed be found within English words – but only when they are presented in their canonical horizontal orientation. The effect is also highly sensitive to the type of cue and target used. Cues which draw attention to the “wholeness” of the word appear to amplify the object based effect. A secondary finding of this thesis is that under certain circumstances participants apply some form of attentional mapping to words which respects the direction of reading. Participants are faster (or experience less cost) when prompted to move their attention in accord with reading direction than against it. This effect only occurs when the word stimuli are used repeatedly during the course of the experiment. The final finding of this thesis is that both the object based attentional effect and the reading direction effect described above can be found using either real words or a non-lexical stimulus, specifically symbol strings. This strongly implies that these phenomena are not exclusively associated with word stimuli, but are instead associated with lower level visual processing. Nonetheless, it is considered highly likely that these processes are involved in the day-to-day process of reading.
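As a rough illustration of how an object-based cueing effect of the kind described above is commonly quantified, the following Python sketch computes the space-based cueing cost and the additional object-based effect from mean reaction times. The condition names follow the standard Egly, Driver & Rafal (1994) design, and all numbers are invented for illustration; this is not the analysis code used in the thesis.

```python
import numpy as np

# Hypothetical mean reaction times (ms), one value per participant, per cueing condition.
rt_valid = np.array([352., 360., 348.])                # target appears at the cued location
rt_invalid_same_object = np.array([378., 385., 371.])  # uncued location on the cued object
rt_invalid_diff_object = np.array([396., 402., 390.])  # equidistant location on a different object

# Space-based cueing cost: any shift of attention away from the cued location.
space_cost = rt_invalid_same_object.mean() - rt_valid.mean()

# Object-based effect: the extra cost of shifting attention to a different object,
# over and above shifting within the cued object.
object_effect = rt_invalid_diff_object.mean() - rt_invalid_same_object.mean()

print(f"space-based cost:    {space_cost:.1f} ms")
print(f"object-based effect: {object_effect:.1f} ms")
```

In the word experiments described above, the "object" is simply a word (or symbol string), so a positive object-based effect indicates that a cue to one part of the word spreads attention across the whole word.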
75

Stability from variety: the prototype effect in face recognition

Renfrew, Janelle E. January 2008 (has links)
The central goal of the current thesis was to increase our understanding of how representations of individual faces are built from instances that vary. The prototype effect was used as a tool to probe the nature of our internal face representations. In face recognition, the prototype effect refers to the tendency to recognize, or find familiar, the average image of a face after having studied a series of similar face images. The experiments presented in this thesis investigated the modulating role of different variables on the prototype effect in face recognition. In the study phase, two or more different exemplars based on the same identity were presented. In the test phase, one of the seen exemplars, the unseen prototype, and an unseen exemplar of each studied identity were presented one at a time, and participants were asked to make a recognition judgement about the prior occurrence of either the exact image or the person’s face. Variants of each face identity were either unaltered images of real people’s faces, or they were created artificially by manipulating images of faces using several different techniques. All experiments using artificial variants produced strong prototype effects. The unseen prototype image was recognized more confidently than the actually studied images. This was true even when the variants were so similar that they were barely perceptually discriminable. Importantly, even when participants were given additional exposure to the studied exemplars, no weakening of the prototype effect was observed. Surprisingly, in the experiments using natural images of real people’s faces, no clear recognition advantage for the prototype image was observed. Results suggest that the prototype effect in face recognition might not be tapping an averaging mechanism that operates solely on variations within the same identity.
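As a minimal sketch of the averaging idea that the prototype effect is often taken to reflect, the unseen prototype can be approximated as the pixel-wise mean of the studied exemplar images. The file names and the use of Pillow below are purely illustrative assumptions, not the stimulus-construction method reported in the thesis.

```python
import numpy as np
from PIL import Image

# Hypothetical exemplar images of one identity (aligned, same size, converted to grayscale).
exemplar_files = ["identity1_var1.png", "identity1_var2.png", "identity1_var3.png"]

# Stack the exemplars into an array of shape (n_exemplars, height, width).
exemplars = np.stack(
    [np.asarray(Image.open(f).convert("L"), dtype=np.float64) for f in exemplar_files]
)

# The "prototype" is simply the pixel-wise average of the studied variants.
prototype = exemplars.mean(axis=0)

Image.fromarray(prototype.astype(np.uint8)).save("identity1_prototype.png")
```

On this view, a prototype effect arises if the averaged image is recognised at least as confidently as the variants that were actually studied.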
76

The integration of paralinguistic information from the face and the voice

Watson, Rebecca January 2013 (has links)
We live in a world which bombards us with a huge amount of sensory information, even if we are not always aware of it. To successfully navigate, function and ultimately survive in our environment we use all of the cues available to us. Furthermore, we actually combine this information: doing so allows us not only to construct a richer percept of the objects around us, but actually increases the reliability of our decisions and sensory estimates. However, at odds with our naturally multisensory awareness of our surroundings, the literature addressing unisensory processes has always far exceeded that which examines the multimodal nature of perception. Arguably the most salient and relevant stimuli in our environment are other people. Our species is not designed to operate alone, and so we have evolved to be especially skilled in all those things which enable effective social interaction – this could be engaging in conversation, but equally as well recognising a family member, or understanding the current emotional state of a friend, and adjusting our behaviour appropriately. In particular, the face and the voice both provide us with a wealth of hugely relevant social information - linguistic, but also non-linguistic. In line with work conducted in other fields of multisensory perception, research on face and voice perception has mainly concentrated on each of these modalities independently, particularly face perception. Furthermore, the work that has addressed integration of these two sources by and large has concentrated on the audiovisual nature of speech perception. The work in this thesis is based on a theoretical model of voice perception which not only proposed a serial processing pathway of vocal information, but also emphasised the similarities between face and voice processing, suggesting that this information may interact. Significantly, these interactions were not just confined to speech processing, but rather encompassed all forms of information processing, whether this was linguistic or paralinguistic. Therefore, in this thesis, I concentrate on the interactions between, and integration of face-voice paralinguistic information. In Chapter 3 we conducted a general investigation of neural face-voice integration. A number of studies have attempted to identify the cerebral regions in which information from the face and voice combines; however, in addition to a large number of regions being proposed as integration sites, it is not known whether these regions are selective in the binding of these socially relevant stimuli. We identified firstly regions in the bilateral superior temporal sulcus (STS) which showed an increased response to person-related information – whether this was faces, voices, or faces and voices combined – in comparison to information from objects. A subsection of this region in the right posterior superior temporal sulcus (pSTS) also produced a significantly stronger response to audiovisual as compared to unimodal information. We therefore propose this as a potential people-selective, integrative region. Furthermore, a large portion of the right pSTS was also observed to be people-selective and heteromodal: that is, both auditory and visual information provoked a significant response above baseline. These results underline the importance of the STS region in social communication. Chapter 4 moved on to study the audiovisual perception of gender. 
Using a set of novel stimuli – which were not only dynamic but also morphed in both modalities – we investigated whether different combinations of gender information in the face and voice could affect participants' perception of gender. We found that participants indeed combined both sources of information when categorising gender, with their decision being reflective of information contained in both modalities. However, this combination was not entirely equal: in this experiment, gender information from the voice appeared to dominate over that from the face, exerting a stronger modulating effect on categorisation. This result was supported by the findings from conditions which directed attention, where we observed that participants were able to ignore face but not voice information; and also by reaction time results, where latencies were generally a reflection of voice morph. Overall, these results support interactions between face and voice in gender perception, but demonstrate that (due to a number of probable factors) one modality can exert more influence than another. Finally, in Chapter 5 we investigated the proposed interactions between affective content in the face and voice. Specifically, we used a 'continuous carry-over' design – again in conjunction with dynamic, morphed stimuli – which allowed us to investigate not only 'direct' effects of different sets of audiovisual stimuli (e.g., congruent, incongruent), but also adaptation effects (in particular, the effect of emotion expressed in one modality upon the response to emotion expressed in another modality). Parallel to behavioural results, which showed that the crossmodal context affected the time taken to categorise emotion, we observed a significant crossmodal effect in the right pSTS, which was independent of any within-modality adaptation. We propose that this result provides strong evidence that this region may be composed of similarly multisensory neurons, as opposed to two sets of interdigitated neurons responsive to information from one modality or the other. Furthermore, an analysis investigating stimulus congruence showed that the degree of incongruence modulated activity across the right STS, further suggesting that the neural response in this region can be altered depending on the particular combination of affective information contained within the face and voice. Overall, both behavioural and cerebral results from this study suggested that participants integrated emotion from the face and voice.
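The voice dominance described above can be illustrated with a simple weighted cue-combination sketch, in which the probability of a "male" response depends more strongly on the voice morph level than on the face morph level. All parameter values below are invented for illustration; they are not estimates from the thesis.

```python
import numpy as np

def p_male(face_morph: float, voice_morph: float,
           w_face: float = 0.3, w_voice: float = 0.7, bias: float = -0.5) -> float:
    """Probability of a 'male' response for morph levels in [0, 1] (0 = female, 1 = male).

    The larger voice weight encodes the observed dominance of vocal over facial
    gender information; the weights here are illustrative only.
    """
    x = w_face * face_morph + w_voice * voice_morph + bias
    return 1.0 / (1.0 + np.exp(-8.0 * x))  # logistic link; slope chosen arbitrarily

# An incongruent pairing: clearly female face, clearly male voice.
print(f"female face + male voice   -> p(male) = {p_male(0.1, 0.9):.2f}")
# The same face paired with a clearly female voice.
print(f"female face + female voice -> p(male) = {p_male(0.1, 0.1):.2f}")
```

With these assumed weights, the incongruent pairing is still judged mostly "male", mirroring the finding that categorisation followed the voice more closely than the face.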
77

Concurrent processing of visual and auditory information: an assessment of parallel versus sequential processing models

Evans, Greg, 1948- January 1994 (has links)
Bibliography: leaves 226-237. xii, 237 leaves: ill.; 30 cm. Title page, contents and abstract only; the complete thesis in print form is available from the University Library. Thesis (Ph.D.)--University of Adelaide, Dept. of Psychology, 1995
78

Some neural bases of attentional learning

Tai, Chih-Ta January 1992 (has links)
No description available.
79

Auditory foreground and background decomposition: New perspectives gained through methodological diversification

Thomaßen, Sabine 11 April 2022 (has links)
A natural auditory scene contains many sound sources, each of which produces complex sounds. These sounds overlap and reach our ears at the same time, but they also change constantly. To still be able to follow the sound source of interest, the auditory system must decide where each individual tone belongs and integrate this information over time. For well-controlled investigations on the mechanisms behind this challenging task, sound sources need to be simulated in the lab. This is mostly done with sine tones arranged in certain spectrotemporal patterns. The vast majority of studies simply interleave two sub-sequences of sine tones. Participants report how they perceive these sequences, or they perform a task whose performance measure provides hints about how the scene was perceived. While many important insights have been gained with this procedure, the questions that can be addressed with it are limited, and the commonly used response methods are partly susceptible to distortions or are only indirect measures. The present thesis enlarged the complexity of the tone sequences and the diversity of perceptual measures used for investigations on auditory scene analysis. These changes are intended to open up new questions and give new perspectives on our knowledge about auditory scene analysis. In detail, the thesis established three-tone sequences as a tool for specific investigations on perceptual foreground and background processing in complex auditory scenes. In addition, it modified an already established approach for indirect measures of auditory perception in a way that enables detailed and univocal investigations on background processing. Finally, a new response method was developed, namely a no-report method for auditory perception that might also serve as a method to validate subjective report measures. This new methodological approach uses eye movements as a measurement tool for auditory perception. With the aid of all these methodological improvements, the current thesis shows that auditory foreground formation is actually more complex than previously assumed, since listeners hold more than one auditory source in the foreground without being forced to do so. In addition, it shows that the auditory system prefers a limited number of specific source configurations, probably to avoid combinatorial explosion. Finally, the thesis indicates that the formation of the perceptual background is also quite complex, since the auditory system holds in parallel perceptual organization alternatives that had been assumed to be mutually exclusive. Thus, both the foreground and the background follow different rules than expected based on two-tone sequences. However, one finding seems to be true for both kinds of sequences: the impact of the tone pattern on subjective perception is marginal, be it in two- or three-tone sequences.
Regarding the no-report method for auditory perception, the thesis shows that eye movements and the reported auditory foreground formations were in good agreement, and it seems that this approach indeed has the potential to become the first no-report measure for auditory perception.

Table of contents:
Abstract
Acknowledgments
List of Figures
List of Tables
Collaborations
1 General Introduction
1.1 The auditory foreground
1.1.1 Attention and auditory scene analysis
1.1.2 Investigating auditory scene analysis with two-tone sequences
1.1.3 Multistability
1.2 The auditory background
1.2.1 Investigating auditory background processing
1.3 Measures of auditory perception
1.3.1 Report procedures
1.3.2 Performance-based measures
1.3.3 Psychophysiological measures
1.4 Summary and goals of the thesis
2 The auditory foreground
2.1 Study 1: Foreground formation in three-tone sequences
2.1.1 Abstract
2.1.2 Introduction
2.1.3 Methods
2.1.4 Results
2.1.5 Discussion
2.2 Study 2: Pattern effects in three-tone sequences
2.2.1 Abstract
2.2.2 Methods
2.2.3 Results
2.2.4 Discussion
2.3 Study 3: Pattern effects in two-tone sequences
2.3.1 Abstract
2.3.2 Introduction
2.3.3 General Methods
2.3.4 Experiment 1 – Methods and Results
2.3.5 Experiment 2 – Methods and Results
2.3.6 Experiment 3 – Methods and Results
2.3.7 Discussion
3 The auditory background
3.1 Study 4: Background formation in three-tone sequences
3.1.1 Abstract
3.1.2 Introduction
3.1.3 Methods
3.1.4 Results
3.1.5 Discussion
4 Audio-visual coupling for investigations on auditory perception
4.1 Study 5: Using Binocular Rivalry to tag auditory perception
4.1.1 Abstract
4.1.2 Introduction
4.1.3 Methods
4.1.4 Results
4.1.5 Discussion
5 General Discussion
5.1 Short review of the findings
5.2 The auditory foreground
5.2.1 Auditory foreground formation and attention theories
5.2.2 The role of tone pattern in foreground formation
5.2.3 Methodological considerations and continuation
5.3 The auditory background
5.3.1 Auditory object formation without attention
5.3.2 Multistability without attention
5.3.3 Methodological considerations and continuation
5.4 Auditory scene analysis by audio-visual coupling
5.4.1 Methodological considerations and continuation
5.5 Artificial listening situations and conclusions on natural hearing
6 Conclusions
References
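For readers unfamiliar with the stimuli referred to in the abstract above, the following sketch generates a classic interleaved two-tone (ABA_) sequence from sine tones; three-tone sequences extend the same construction with a third sub-sequence. The frequencies, durations, and ABA_ arrangement are assumed illustrative values, not the thesis's actual stimulus parameters.

```python
import numpy as np

SR = 44_100  # sample rate (Hz)

def tone(freq_hz: float, dur_s: float, ramp_s: float = 0.01) -> np.ndarray:
    """A sine tone with short onset/offset ramps to avoid clicks."""
    t = np.arange(int(SR * dur_s)) / SR
    y = np.sin(2 * np.pi * freq_hz * t)
    ramp = np.linspace(0.0, 1.0, int(SR * ramp_s))
    y[: ramp.size] *= ramp
    y[-ramp.size:] *= ramp[::-1]
    return y

def aba_sequence(freq_a: float = 440.0, freq_b: float = 554.0,
                 tone_dur: float = 0.1, n_triplets: int = 20) -> np.ndarray:
    """Interleave two sub-sequences as repeating ABA_ triplets ('_' is a silent gap)."""
    silence = np.zeros(int(SR * tone_dur))
    triplet = np.concatenate([tone(freq_a, tone_dur), tone(freq_b, tone_dur),
                              tone(freq_a, tone_dur), silence])
    return np.tile(triplet, n_triplets)

sequence = aba_sequence()  # roughly 8 seconds of the ambiguous streaming stimulus
```

The resulting array can be written to a WAV file or played back with any audio library; depending on the frequency separation and tempo, listeners hear it either as one integrated stream or as two segregated streams.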
80

The sensorimotor theory of perceptual experience

Silverman, David January 2014 (has links)
The sensorimotor theory is an influential, non-mainstream account of perception and perceptual consciousness intended to improve in various ways on orthodox theories. It is often taken to be a variety of enactivism, and in common with enactivist cognitive science more generally, it de-emphasises the theoretical role played by internal representation and other purely neural processes, giving theoretical pride of place instead to interactive engagements between the brain, non-neural body and outside environment. In addition to offering a distinctive account of the processing that underlies perceptual consciousness, the sensorimotor theory aims to offer a new and improved account of the logical and phenomenological character of perceptual experience, and the relation between physical and phenomenal states. Since its inception in a 2001 paper by O'Regan and Noë, the theory has prompted a good deal of increasingly prominent theoretical and practical work in cognitive science, as well as a large body of secondary literature in philosophy of cognitive science and philosophy of perception. In spite of its influential character, many of the theory's most basic tenets are incompletely or ambiguously defined, and it has attracted a number of prominent objections. This thesis aims to clarify the conceptual foundations of the sensorimotor theory, including the key theoretical concepts of sensorimotor contingency, sensorimotor mastery, and presence-as-access, and defends a particular understanding of the respective theoretical roles of internal representation and behavioural capacities. In so doing, the thesis aims to highlight the sensorimotor theory's virtues and defend it from some leading criticisms, with particular attention to a response by Clark which claims that perception and perceptual experience plausibly depend on the activation of representations which are not intimately involved in bodily engagements between the agent and environment. A final part of the thesis offers a sensorimotor account of the experience of temporally extended events, and shows how, with reference to this, we can better understand object experience.
