1

An information processing approach to the performance of perceptually guided action

Greening, Sarah Jane January 1994 (has links)
The series of experiments reported in this thesis concerns the ability to make perceptual-motor judgements of distance (Ex. 1 to Ex. 7) and size (Ex. 8). Experiments 1 and 2 indicated that visual judgements of maximum step length were affected by distance from the site of action, the angle at which the obstacle was presented, and whether monocular or binocular vision was used. This suggested that perceived maximum ability was not based on a body-scaled invariant as suggested by Gibson (1979). Experiments 3 and 4 were designed to investigate the effect of altering the length of the to-be-remembered distance, and compared performance across both visual and kinaesthetic conditions. The results suggested that the reproduction of distance is normally based on memory for the location of the end point, rather than the extent of the distance. No support was found for the claim that differences between the accuracy of recall of location and extent were due to the differential rehearsability of visual and kinaesthetic codes. Instead, it was proposed that changes in the procedure may have influenced performance by reducing the usefulness of a 'landmark'-based form of coding in the extent trials. Experiments 5 and 6 were designed to investigate predictions arising from one of the dominant models of cross-modal performance (Connolly and Jones, 1970). Connolly and Jones's model postulated that differences between intra- and cross-modal performance could be explained in terms of the characteristics of modality-specific short-term storage codes, and that translation between codes occurs prior to short-term storage. In general, the results obtained were supportive of the pattern of accuracy reported by Connolly and Jones. However, the effect of delaying knowledge of the reproduction mode until the end of the retention interval was inconsistent with the model; that is, withholding information about the required reproduction mode appeared to increase the accuracy of judgements. One explanation for this effect is that pre-translated information was held in a form associated with high levels of both accuracy and attention. This speculative explanation was seen to have parallels with the Working Memory model (Baddeley and Hitch, 1974). Experiments 7 and 8 used an interference-task paradigm to investigate whether a separate visuo-spatial store could be demonstrated to exist in relation to perceptual-motor information. The results failed to find conclusive support for such a store. The cumulative findings of Experiments 1 to 8 are discussed in relation to general models of perceptual-motor performance.
2

Movement correlation as a nonverbal cue in the perception of affiliation in thin slices of behaviour

Latif, Nida 13 September 2012 (has links)
Our perceptual systems can create a rich representation of the social cues gathered during social interaction. Very brief exposures or ‘thin slices’ of behavioural and linguistic information are sufficient for making accurate judgments regarding social situations and building these social representations. This is akin to our accurate recognition of static visual stimuli after only brief exposures to a scene, as studied under the heading of scene gist (Oliva, 2005). This thesis examines a specific social cue during social interaction: how the correlation of movement between two people varies as a result of their affiliation. Further, this thesis investigates how we perceive that behavioural cue when making judgments of affiliation while observing conversation. It has already been established that there is coordination of linguistic and behavioural information during social interaction (Ambady & Rosenthal, 1992). This coordination is more prominent when individuals are familiar with each other than when they are not (Dunne & Ng, 1994). The first study in this thesis quantifies the variation in the coordination of movement between two people in conversation based on their affiliation. Results demonstrate that the correlation of movements between friends is greater than the correlation during stranger interaction. This experiment demonstrates that movement varies as a result of affiliation and that people could use this coordination as a cue when making accurate judgments of affiliation while observing social interaction. The second study used the analysis of movement correlation to examine how correlation serves as a cue for the accuracy of affiliation judgments by observers. Results demonstrate that although correlation was not a significant cue in affiliation perception, participants were nonetheless able to perform the perceptual task. These results suggest that the perception of social information is multi-faceted and that many cues contribute to it. These findings are discussed in terms of our sensitivity to more specific movement correlations as opposed to the global correlations used in this study. These studies highlight the need for further investigation into how behavioural cues function within the judgment of social information. / Thesis (Master, Psychology) -- Queen's University, 2012.
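Although the abstract does not specify the estimator, movement correlation of this kind is commonly quantified as a windowed Pearson correlation between the two partners' movement time series (for example, frame-wise motion energy extracted from video of each person). A minimal sketch under that assumption, with all window and step values invented for illustration:

```python
import numpy as np

def windowed_correlation(movement_a, movement_b, window=120, step=30):
    """Mean Pearson correlation between two movement time series,
    computed over sliding windows (e.g. per-frame motion energy
    for each conversational partner)."""
    correlations = []
    for start in range(0, len(movement_a) - window + 1, step):
        a = movement_a[start:start + window]
        b = movement_b[start:start + window]
        # Skip zero-variance windows (no movement at all).
        if a.std() == 0 or b.std() == 0:
            continue
        correlations.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(correlations))

# Hypothetical usage: a shared component mimics coordinated movement,
# so the "friends" dyad should correlate more than the "strangers" dyad.
rng = np.random.default_rng(0)
shared = rng.standard_normal(600)
friends = windowed_correlation(shared + rng.standard_normal(600),
                               shared + rng.standard_normal(600))
strangers = windowed_correlation(rng.standard_normal(600),
                                 rng.standard_normal(600))
print(f"friends: {friends:.2f}, strangers: {strangers:.2f}")
```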
3

Target template guidance of eye movements during real-world search

Malcolm, George Law January 2010 (has links)
Humans must regularly locate task-relevant objects when interacting with the world around them. Previous research has identified different types of information that the visual system can use to help locate objects in real-world scenes, including low-level image features and scene context. However, previous research using object arrays suggests that there may be another type of information that can guide real-world search: target knowledge. When a participant knows what a target looks like they generate and store a visual representation, or template, of it. This template then facilitates the search process. A complete understanding of real-world search needs to identify how a target template guides search through scenes. Three experiments in Chapter 2 confirmed that a target template facilitates real-world search. Eye-tracking showed that target knowledge facilitated both scanning and verification behaviours during search, but not the search initiation process. Within the scanning epoch a target template facilitated gaze directing and shortened fixation durations. These results suggest that target knowledge affects both the activation map, which selects which regions of the scene to fixate, and the evaluation process that compares a fixated object to the internal representation of the target. With the exact behaviours that a target template facilitates now identified, Chapter 3 investigated the role that target colour played in template-guided search. Colour is one of the more interesting target features as it has been shown to be preferred by the visual system over other features when guiding search through object arrays. Two real-world search experiments in Chapter 3 found that colour information had its strongest effect on the gaze directing process, suggesting that the visual system relies heavily on colour information when searching for target-similar regions in the scene percept. Although colour was found to facilitate the evaluation process too, both when rejecting a fixated object as a distracter and accepting it as the target, this behaviour was found to be influenced comparatively less. This suggests that the two main search behaviours – gaze directing and region evaluation – rely on different sets of template features. The gaze directing process relies heavily on colour information, but knowledge of other target features will further facilitate the evaluation process. Chapter 4 investigated how target knowledge combined with other types of information to guide search. This is particularly relevant in real-world search where several sources of guidance information are simultaneously available. A single experiment investigated how target knowledge and scene context combined to facilitate search. Both information types were found to facilitate scanning and verification behaviours. During the scanning epoch both facilitated the eye guidance and object evaluation processes. When both information sources were available to the visual system simultaneously, each search behaviour was facilitated additively. This suggests that the visual system processes target template and scene context information independently. Collectively, the results not only indicate the manner in which a target template facilitates real-world search but also update our understanding of real-world search and the visual system.
These results can help increase the accuracy of future real-world search models by specifying the manner in which our visual system utilises target template information, which target features are predominantly relied upon, and how target knowledge combines with other types of guidance information.
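The activation-map idea in this abstract can be illustrated with a toy computation: score each pixel by its similarity to the target's colour, blur the map to mimic coarse peripheral resolution, and take the peak as the next fixation. The RGB distance measure and the smoothing width below are assumptions for illustration, not the thesis's model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def colour_activation_map(scene_rgb, target_rgb, sigma=5.0):
    """Toy activation map: per-pixel similarity to the target colour
    (negative Euclidean distance in RGB), spatially smoothed."""
    diff = scene_rgb.astype(float) - np.asarray(target_rgb, dtype=float)
    similarity = -np.linalg.norm(diff, axis=-1)  # higher = more target-like
    return gaussian_filter(similarity, sigma=sigma)

# Hypothetical usage on a random "scene" with a planted reddish target.
scene = np.random.randint(0, 256, (240, 320, 3))
scene[100:110, 200:210] = (200, 30, 30)  # target-coloured patch
activation = colour_activation_map(scene, target_rgb=(200, 30, 30))
next_fixation = np.unravel_index(np.argmax(activation), activation.shape)
print("predicted fixation (row, col):", next_fixation)
```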
4

Dissociating eye-movements and comprehension during film viewing

Hutson, John January 1900 (has links)
Master of Science / Department of Psychological Sciences / Lester Loschky / Film is a ubiquitous medium. However, the process by which we comprehend film narratives is not well understood. Reading research has shown a strong connection between eye-movements and comprehension. In four experiments we tested whether the eye-movement and comprehension relationship held for films. We did this by starting participants at different points in a film to manipulate their comprehension, and then tracking their eyes. Overall, the manipulation created large differences in comprehension, but produced only small differences in eye-movements. In one condition of the final experiment, a task manipulation was designed to prioritize different stimulus features. This task manipulation created large differences in eye-movements when compared to participants freely viewing the clip. These results indicate that with the implicit task of narrative comprehension, top-down comprehension processes have little effect on eye-movements. To allow for strong, volitional top-down control of eye-movements in film, task manipulations need to make features that are important to comprehension irrelevant to the task.
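Comparing eye-movements across comprehension conditions presupposes some similarity measure over gaze data; the abstract does not name the thesis's measures, so the following is just one minimal, commonly used option (mean distance between gaze positions sampled at matched time points), sketched under assumed screen dimensions:

```python
import numpy as np

def mean_gaze_distance(gaze_a, gaze_b):
    """Mean Euclidean distance (pixels) between two viewers' gaze
    positions at the same timestamps; lower values indicate the
    viewers attended to more similar screen locations."""
    return float(np.mean(np.linalg.norm(gaze_a - gaze_b, axis=1)))

# Hypothetical usage: (n_samples, 2) arrays of x/y gaze coordinates
# for a viewer with narrative context and one without.
viewer_context = np.random.uniform(0, (1280, 720), size=(500, 2))
viewer_no_context = np.random.uniform(0, (1280, 720), size=(500, 2))
print(f"{mean_gaze_distance(viewer_context, viewer_no_context):.1f} px")
```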
5

Task-specific learning supports control over visual distraction

Cosman, Joshua Daniel 01 May 2012 (has links)
There is more information in the visual environment than we can process at a given time, and as a result selective attention mechanisms have developed that allow us to focus on information that is relevant to us while ignoring information that is not. It is often assumed that our ability to overcome distraction by irrelevant information in the environment requires conscious, effortful processing, and traditional theories of selective attention have emphasized the role of an observer's explicit intentions in driving this control. At the same time, effortful control on the basis of explicit processes may be maladaptive when the behaviors to be executed are complex and dynamic, as is the case with many behaviors that we carry out on a daily basis. One way to increase the efficiency of this process would be to store information regarding past experiences with a distracting stimulus, and use this information to control distraction upon future encounters with that particular stimulus. The focus of the current thesis was to examine such a "learned control" view of distraction, where experience with particular stimuli is the critical factor determining whether or not a salient stimulus will capture attention and distract us in a given situation. In Chapters 2 through 4, I established a role for task-specific learning in the ability of observers to overcome attentional capture, showing that experience with particular attributes of distracting stimuli and the context in which the task was performed led to a predictable decrease in capture. In Chapter 5, I examined the neural basis of these learned control effects, and the results suggest that neocortical and medial temporal lobe learning mechanisms both contribute to the experience-dependent modulation of attentional capture observed in Chapters 2-4. Based on these results, a model of attentional capture was proposed in which experience with particular stimulus attributes and their context critically determine the ability of salient, task-irrelevant information to capture attention and cause distraction. I conclude that although explicit processes may play some role in this process under some conditions, much of our ability to overcome distraction results directly from past experience with the visual world.
6

Top-down effects on attentional selection in dynamic scenes and subsequent memory: attitude congruence and social vigilantism in political videos

Hutson, John Patrick January 1900 (has links)
Doctor of Philosophy / Department of Psychological Sciences / Lester C. Loschky / Political videos are created as persuasive media, and at a basic level that persuasion would require that the videos guide viewer attention to the relevant persuasive content. Recent work has shown that filmmakers have techniques that allow them to guide where viewers look, and this guidance occurs even when viewers have very different understandings of the film. The current research tested whether these attentional effects carry over to political videos, or whether the top-down factors of attitude congruence and social vigilantism, belief superiority and the tendency to impress one’s “superior” beliefs on others (O'Dea, Bueno, & Saucier, 2018; Saucier & Webster, 2010; Saucier, Webster, Hoffman, & Strain, 2014), would break the ability of videos to guide viewers’ attention. Attentional selection was measured through participants’ eye movements, and memory encoding was measured through recall and recognition for both verbal and visual information. Three overarching competing hypotheses predicted different relationships between attitude congruence, social vigilantism, and visual attention and memory. The Tyranny of Film Hypothesis predicted that the videos would guide viewer attention, regardless of attitude congruence. This would result in similar eye-movements and memory for all participants. The Selective Exposure Hypothesis predicted that participants would avoid processing attitude-incongruent information. As a result, viewers’ visual attention would be directed away from attitude-incongruent information, and subsequent memory would be worse. Lastly, the Social Vigilantism Hypothesis predicted that people high in Social Vigilantism would engage more with attitude-incongruent information. Two experiments tested these hypotheses. The first was the Memory experiment (conducted online), and the second was the Eye movement experiment. In each experiment, participants watched a series of political advertisement and debate videos, and attitudes were measured to identify which information in the videos was attitude-congruent and incongruent. The Memory experiment showed some support for the Social Vigilantism Hypothesis, with people high in Social Vigilantism having better memory for attitude-incongruent information on certain memory measures. Conversely, the Eye movement experiment consistently showed strong stimulus-driven effects in support of the Tyranny of Film, but also weaker attitude and social vigilantism effects that were independent of attitude congruence. Altogether, these results show that dynamic video stimulus features are the best predictors of viewer attention and memory, but viewer attitude and social vigilantism have subtle top-down effects. The support for different hypotheses between the two experiments indicates that the strength of top-down effects may depend on the format of the viewing experience, and specifically on how much control the viewer has over the experience.
7

Assessing EEG neuroimaging with machine learning

Stewart, Andrew David January 2016 (has links)
Neuroimaging techniques can give novel insights into the nature of human cognition. We do not wish only to label patterns of activity as potentially associated with a cognitive process, but also to probe this in detail, so as to better examine how it may inform mechanistic theories of cognition. A possible approach towards this goal is to extend EEG 'brain-computer interface' (BCI) tools - where motor movement intent is classified from brain activity - to also investigate visual cognition experiments. We hypothesised that, building on BCI techniques, information from visual object tasks could be classified from EEG data. This could allow novel experimental designs to probe visual information processing in the brain. This can be tested and falsified by applying machine learning algorithms to EEG data from a visual experiment, and quantified by scoring the accuracy at which trials can be correctly classified. Further, we hypothesised that ICA can be used for source separation of EEG data to produce putative activity patterns associated with visual processing mechanisms. Detailed profiling of these ICA sources could be informative as to the nature of visual cognition in a way that is not accessible through other means. While ICA has been used previously to remove 'noise' from EEG data, profiling the relation of common ICA sources to cognitive processing appears less well explored. This can be tested and falsified by using ICA sources as training data for the machine learning, and quantified by scoring the accuracy at which trials can be correctly classified using these data, while also comparing this with the equivalent EEG data. We find that machine learning techniques can classify the presence or absence of visual stimuli at 85% accuracy (0.65 AUC) using a single optimised channel of EEG data, and this improves to 87% (0.7 AUC) using data from an equivalent single ICA source. We identify data from this ICA source in the time period around 75-125 ms post-stimulus presentation as greatly more informative in decoding the trial label. The most informative ICA source is located in the central occipital region and typically has prominent 10-12 Hz synchrony and a -5 μV ERP dip at around 100 ms. This appears to be the best predictor of trial identity in our experiment. With these findings, we then explore further experimental designs to investigate ongoing visual attention and perception, attempting online classification of vision using these techniques and IC sources. We discuss how these relate to standard EEG landmarks such as the N170 and P300, and compare their use. With this thesis, we explore this methodology of quantifying EEG neuroimaging data with machine learning separation and classification, and discuss how it can be used to investigate visual cognition. We hope the greater information from EEG analyses, with the predictive power of each ICA source quantified by machine learning, might give insight and constraints for macro-level models of visual cognition.
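The pipeline the abstract describes (source separation, then trial classification scored by accuracy/AUC) can be sketched with standard tools. Two simplifications relative to real EEG work: the data below are synthetic, and FastICA is applied across the time-points of single-channel trials, whereas EEG ICA normally unmixes across channels. FastICA, logistic regression, and all parameter values are illustrative choices, not the thesis's implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: trials x time-samples from one EEG channel, with
# labels marking stimulus-present (1) vs stimulus-absent (0) trials.
rng = np.random.default_rng(1)
n_trials, n_samples = 200, 128
X_eeg = rng.standard_normal((n_trials, n_samples))
y = rng.integers(0, 2, n_trials)
# Inject a toy "evoked response" in a fixed early window for present
# trials (cf. the informative 75-125 ms period reported above).
X_eeg[y == 1, 35:55] += 0.8

# Separate the trial data into independent components, then train a
# classifier on the component activations and score it with ROC AUC.
ica = FastICA(n_components=10, random_state=0)
X_sources = ica.fit_transform(X_eeg)
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X_sources, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on ICA sources: {auc:.2f}")
```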
8

Outdoor Scenes for Data Visualization

Hillery, Benjamin A. 22 April 2011 (has links)
Recent cognitive research indicates that the human brain possesses special abilities for processing information relating to outdoor scenes. Simulated outdoor scenes are presented as an effective method of displaying higher-dimensional data in an efficient and comprehensible manner. We demonstrate novel methods of using outdoor objects and scenes to display multidimensional content in a way that is intuitive for humans to understand, and of exploiting various cues commonly found in scenes from the natural world to communicate the values of multiple variables.
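The core idea, mapping each variable of a data record onto a visual property of a scene object, can be sketched abstractly. Every feature name, channel assignment, and range below is invented for illustration; the thesis's actual mappings are not given in this abstract:

```python
from dataclasses import dataclass

@dataclass
class SceneGlyph:
    """One data record rendered as a tree in a simulated outdoor scene.
    Each visual channel carries one variable (hypothetical mapping)."""
    height_m: float      # variable 1 -> tree height
    canopy_green: float  # variable 2 -> foliage saturation, 0..1
    lean_deg: float      # variable 3 -> trunk lean angle
    x_position: float    # variable 4 -> placement along the ground plane

def record_to_glyph(record, bounds):
    """Linearly map a 4-tuple of raw values into scene-feature ranges."""
    def scale(v, lo, hi, out_lo, out_hi):
        return out_lo + (v - lo) / (hi - lo) * (out_hi - out_lo)
    v1, v2, v3, v4 = record
    return SceneGlyph(
        height_m=scale(v1, *bounds[0], 2.0, 20.0),
        canopy_green=scale(v2, *bounds[1], 0.0, 1.0),
        lean_deg=scale(v3, *bounds[2], -15.0, 15.0),
        x_position=scale(v4, *bounds[3], 0.0, 100.0),
    )

# Hypothetical usage: one record with known min/max bounds per variable.
glyph = record_to_glyph((3.2, 0.5, 7.0, 42.0),
                        bounds=[(0, 10), (0, 1), (0, 10), (0, 100)])
print(glyph)
```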
9

Why We Draw: An Exploration Into How and Why Drawing Works

Mills, Jonathan Edward 28 June 2010 (has links)
Visual information allows us to experience concepts in a way that is analogous to the real world; an image represents the semantic meaning of a concept and does so without conforming to the structural or syntactic rules of standard language. Drawing is therefore an agile form of communication, able to maneuver around barriers that impede the exchange of ideas between one profession and another where the difference in cultural dialects gives rise to translation complications. This thesis argues that the value of visual information lies not in the final, finished images, but during the creation of those images, during the action of drawing. If drawings are generally considered a form of communication, then drawing is a form of visual conversation; much like spoken language, its message unfolds as it is performed, and we make meaning from that performance. Following an exploration of the visual and cognitive systems integral to interpreting visual information, a discussion of language structure and sources of language conflict sets the stage for employing the act of drawing as a collaborative tool in cross-disciplinary settings. Proposed is a set of principles guiding this use of drawing which builds upon the research findings herein. These principles are structured to be usable by all professions, regardless of artistic background or traditional practice, and to encourage a reevaluation of drawing's role in the problem-solution process. / Master of Science
10

Vision and the experience of built environments: two visual pathways of awareness, attention and embodiment in architecture

Rooney, Kevin Kelley January 1900 (has links)
Doctor of Philosophy / Environmental Design and Planning Program / Robert J. Condia / The unique contribution of Vision and the Experience of Built Environments is its specific investigation into the visual processing system of the mind in relationship with the features of awareness and embodiment during the experience of architecture. Each facet of this investigation reflects the essential ingredients of sensation (the visual system), perception (our awareness), and emotions (our embodiment) respectively as a process for aesthetically experiencing our built environments. With regard to our visual system, it is well established in neuroscience that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision and ambient processing of peripheral vision. Thus, our visual processing of, and attention to, objects and scenes depends on how and where these stimuli fall on the retina. Built environments are no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create intellectual and phenomenal experiences respectively with architecture. These two forms of visual processing limit and guide our perception of the built world around us and subsequently our projected and extended embodied interactions with it as manifested in the act of aesthetic experience. By bringing peripheral vision and central vision together in a balanced perspective we will more fully understand that our aesthetic relationship with our built environment is greatly dependent on the dichotomous visual mechanisms of awareness and embodiment.
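The fist-at-arm's-length figure can be sanity-checked with the standard visual-angle formula, angle = 2 * atan(w / 2d). The fist width and arm length below are rough assumed values, not measurements from the thesis:

```python
import math

def visual_angle_deg(object_width, viewing_distance):
    """Visual angle subtended by an object: 2 * atan(w / 2d), in degrees."""
    return math.degrees(2 * math.atan(object_width / (2 * viewing_distance)))

# Rough check with assumed values: ~9 cm fist width, ~60 cm arm length.
# Gives ~8.6 degrees across, i.e. roughly +/- 4-5 degrees around the
# point of gaze, consistent with the ~5 degree central field cited above.
print(f"{visual_angle_deg(0.09, 0.60):.1f} degrees")
```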
