  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Dual task performance may be a better measure of cognitive processing in Huntington's disease than traditional attention tests

Vaportzis, Ria, Georgiou-Karistianis, N., Churchyard, A., Stout, J.C. January 2015 (has links)
Yes / Background: Past research has found cancellation tasks to be reliable markers of cognitive decline in Huntington’s disease (HD). Objective: The aim of this study was to extend previous findings by adopting the use of a dual task paradigm that paired cancellation and auditory tasks. Methods: We compared performance in 14 early stage HD participants and 14 healthy controls. HD participants were further divided into groups with and without cognitive impairment. Results: Results suggested that HD participants were not slower or less accurate compared with controls; however, HD participants showed greater dual task interference in terms of speed. In addition, HD participants with cognitive impairment were slower and less accurate than HD participants with no cognitive impairment, and showed greater dual task interference in terms of speed and accuracy. Conclusions: Our findings suggest that dual task measures may be a better measure of cognitive processing in HD compared with more traditional measures. / Supported by the School of Psychological Sciences, Monash University.
42

A clustering model for item selection in visual search

McIlhagga, William H. January 2013 (has links)
No / In visual search experiments, the subject looks for a target item in a display containing different distractor items. The reaction time (RT) to find the target is measured as a function of the number of distractors (set size). RT is either constant, or increases linearly, with set size. Here we suggest a two-stage model for search in which items are first selected and then recognized. The selection process is modeled by (a) grouping items into a hierarchical cluster tree, in which each cluster node contains a list of all the features of items in the cluster, called the object file, and (b) recursively searching the tree by comparing target features to the cluster object file to quickly determine whether the cluster could contain the target. This model is able to account for both constant and linear RT versus set size functions. In addition, it provides a simple and accurate account of conjunction searches (e.g., looking for a red N among red Os and green Ns), in particular the variation in search rate as the distractor ratio is varied.
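The selection stage described in this abstract can be sketched in a few lines. The following is a hypothetical illustration based only on the abstract's description (the `Node` class and `search` function names are invented, not taken from the thesis): each cluster node stores an "object file" holding the union of its items' features, and the search prunes any cluster whose object file lacks a target feature.

```python
# Illustrative sketch (assumed names) of the two-stage selection model:
# items are feature sets; a cluster node's "object file" is the union of
# its children's features; search descends only into clusters whose
# object file contains every target feature.

class Node:
    def __init__(self, items=None, children=None):
        self.children = children or []
        if items is not None:               # leaf: a single display item
            self.item = items
            self.object_file = set(items)
        else:                               # internal cluster node
            self.item = None
            self.object_file = set().union(*(c.object_file for c in self.children))

def search(node, target):
    """Return a leaf that could match the target feature set, or None."""
    if not target <= node.object_file:
        return None                         # cluster cannot contain the target: prune
    if node.item is not None:
        return node                         # recognition stage would verify this item
    for child in node.children:
        found = search(child, target)
        if found is not None:
            return found
    return None
```

For the conjunction-search example in the abstract (a red N among red Os and green Ns), the distractor clusters are pruned in one comparison each, while a flat display with no useful clustering would force an item-by-item scan, giving the linear RT-versus-set-size behaviour.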
43

Cardiac Vagal Tone & Attentional Control Settings in Adaptive Choice

Speller, Lassiter Freeman, M.A. 05 October 2021 (has links)
No description available.
44

Investigation of Capabilities of Observers in a Watch Window Study

Eziolisa, Ositadimma Nnanna 04 June 2014 (has links)
No description available.
45

The eyes as a window to the mind: inferring cognitive state from gaze patterns

Boisvert, Jonathan 22 March 2016 (has links)
In seminal work, Yarbus examined the characteristic scanpaths that result when viewing an image, observing that scanpaths varied significantly depending on the question posed to the observer. While early efforts examining this hypothesis were equivocal, it has since been established that aspects of an observer’s assigned task may be inferred from their gaze. In this thesis we examine two datasets that have not previously been considered involving prediction of task and observer sentiment respectively. The first of these involves predicting general tasks assigned to observers viewing images, and the other predicting subjective ratings recorded after viewing advertisements. The results present interesting observations on task groupings and affective dimensions of images, and the value of various measurements (gaze or image based) in making these predictions. Analysis also demonstrates the importance of how data is partitioned for predictive analysis, and the complementary nature of gaze specific and image derived features. / May 2016
46

Tag clouds in software visualisation.

Emerson, Jessica Merrill Thurston January 2014 (has links)
Developing and maintaining software is a difficult task, and finding effective methods of understanding software is more necessary now than ever, with the last few decades seeing a dramatic climb in the scale of software. Appropriate visualisations may enable greater understanding of the datasets we deal with in software engineering. As an aid for sense-making, visualisation is widely used in daily life (through graphics such as weather maps and road signs), as well as in other research domains, and is thought to be exceedingly beneficial. Unfortunately, there has not been widespread use of the multitude of techniques which have been proposed for the software engineering domain. Tag clouds are a simple, text-based visualisation commonly found on the internet. Typically, implementations of tag clouds have not included the rich interactive features which are necessary for data exploration. In this thesis, I introduce design considerations and a task set for enabling interaction in a tag cloud visualisation system. These considerations are based on an analysis of challenges in visualising software engineering data, and the perceptual influences of visual properties available in tag clouds. The design and implementation of the interactive system Taggle based on these considerations are also presented, along with its broad-based evaluation. Evaluation approaches were informed by a systematic mapping study of previous tag cloud evaluation, providing an overview of existing research in the domain. The design of Taggle was improved following a heuristic evaluation by domain experts. Subsequent evaluations were divided into two parts: experiments focused on the tag cloud visualisation technique itself, and a task-based approach focused on the whole interactive system.
As evidenced in the series of evaluative studies, the enhanced tag cloud features incorporated into Taggle enabled faster visual search response time, and the system could be used with minimal training to discover relevant information about an unknown software engineering dataset.
48

Facilitating visual target identification using non-visual cues

Ngo, Mary Kim January 2012 (has links)
The research presented in this thesis was designed to investigate whether and how the temporal synchrony and spatial congruence of non-visual cues with visual targets could work together to improve the discrimination and identification of visual targets in neurologically-healthy adult humans. The speed and accuracy of participants’ responses were compared following the presence or absence of temporally synchronous and/or spatially congruent or incongruent auditory, vibrotactile, and audiotactile cues in the context of dynamic visual search and rapidly-masked visual target identification. The understanding of the effects of auditory, vibrotactile, and audiotactile cues derived from these laboratory-based tasks was then applied to an air traffic control simulation involving the detection and resolution of potential conflicts (represented as visual targets amidst dynamic and cluttered visual stimuli). The results of the experiments reported in this thesis demonstrate that, in the laboratory-based setting, temporally synchronous and spatially informative non-visual cues both gave rise to significant improvements in participants’ performance, and the combination of temporal and spatial cuing gave rise to additional improvements in visual target identification performance. In the real-world setting, however, only the temporally synchronous unimodal auditory and bimodal audiotactile cues gave rise to a consistent facilitation of participants’ visual target detection performance. The mechanisms and accounts proposed to explain the effects of spatial and temporal cuing, namely multisensory integration and attention, are examined and discussed with respect to the observed improvements in participants’ visual target identification performance.
49

Advancing large scale object retrieval

Arandjelovic, Relja January 2013 (has links)
The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time. Such a system has a wide variety of applications including object or location recognition, video search, near duplicate detection and 3D reconstruction. The task is very challenging because of large variations in the imaged object appearance due to changes in lighting conditions, scale and viewpoint, as well as partial occlusions. A starting point of established systems which tackle the same task is detection of viewpoint invariant features, which are then quantized into visual words and efficient retrieval is performed using an inverted index. We make the following three improvements to the standard framework: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel discriminative method for query expansion; (iii) a new feature augmentation method. Scaling up to searching millions of images involves either distributing storage and computation across many computers, or employing very compact image representations on a single computer combined with memory-efficient approximate nearest neighbour search (ANN). We take the latter approach and improve VLAD, a popular compact image descriptor, using: (i) a new normalization method to alleviate the burstiness effect; (ii) vocabulary adaptation to reduce influence of using a bad visual vocabulary; (iii) extraction of multiple VLADs for retrieval and localization of small objects. We also propose a method, SCT, for extremely low bit-rate compression of descriptor sets in order to reduce the memory footprint of ANN. The problem of finding images of an object in an unannotated image corpus starting from a textual query is also considered. 
Our approach is to first obtain multiple images of the queried object using textual Google image search, and then use these images to visually query the target database. We show that issuing multiple queries significantly improves recall and enables the system to find quite challenging occurrences of the queried object. Current retrieval techniques work only for objects which have a light coating of texture, while failing completely for smooth (fairly textureless) objects best described by shape. We present a scalable approach to smooth object retrieval and illustrate it on sculptures. A smooth object is represented by its imaged shape using a set of quantized semi-local boundary descriptors (a bag-of-boundaries); the representation is suited to the standard visual word based object retrieval. Furthermore, we describe a method for automatically determining the title and sculptor of an imaged sculpture using the proposed smooth object retrieval system.
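As an illustration of the RootSIFT idea mentioned above: the published description of the method compares SIFT descriptors under a Hellinger kernel rather than Euclidean distance, which can be implemented by L1-normalising each descriptor and taking the element-wise square root, after which ordinary Euclidean comparison applies. This is a minimal sketch of that transform (the function name is ours, not the thesis's code):

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """Map SIFT descriptors (rows) to RootSIFT: L1-normalise each row,
    then take the element-wise square root, so that Euclidean distance
    between the results corresponds to the Hellinger kernel on the
    originals. Works on an (n, d) array of non-negative descriptors."""
    desc = np.asarray(descriptors, dtype=np.float64)
    desc = desc / (desc.sum(axis=1, keepdims=True) + eps)  # L1-normalise
    return np.sqrt(desc)                                   # Hellinger map
```

Because the transform needs no extra storage and leaves downstream machinery unchanged, it matches the abstract's claim of superior performance "without increasing processing or storage requirements": existing pipelines can apply it to descriptors as they are loaded.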
50

The interaction between visual resolution and task-relevance in guiding visual selective attention

Peterson, Jared Joel January 1900 (has links)
Doctor of Philosophy / Department of Psychological Sciences / Lester C. Loschky / Visual resolution (i.e., blur or clarity) is a natural aspect of vision. It has been used by film makers to direct their audience’s attention by focusing the depth of field such that the critical region in a scene is uniquely clear and the surrounding is blurred. Resolution contrast can focus attention towards unique clarity, as supported by previous eye tracking and visual search research (Enns & MacDonald, 2013; Kosara, Miksch, Hauser, Schrammel, Giller, & Tscheligi, 2002; McConkie, 2002; Peterson, 2016; Smith & Tadmor, 2012). However, little is known about how unique blur is involved in guiding attention (e.g., capture, repel, or be ignored). Peterson (2016) provided reaction time (RT) evidence that blur is ignored by selective attention when resolution is not task-relevant. Perhaps visual resolution is a search asymmetry, where unique clarity can be used to guide selective attention during search but unique blur cannot. Yet perhaps the RT evidence from Peterson’s (2016) methodology was not sensitive enough to observe unique blur capturing or repelling attention. Eye movements (e.g., letter first fixated) may be more sensitive than RT, as they measure blur and clarity’s influence on guiding attention earlier in a trial. The current study conducted three experiments that investigated: a) how visual resolution guides attention when it is task-irrelevant (Exp. 1), b) whether visual resolution is a search asymmetry, by manipulating resolution’s task-relevance (Use Blur, Use Clarity, Do Not Use Unique Blur or Clarity, & No Instructions) (Exp. 2), and c) whether blur and/or clarity are processed preattentively or require attention (Exp. 3). Experiments 1 and 2 manipulated blur and clarity (Exp. 1: resolution task-irrelevant; Exp. 2: resolution task-relevant) during a rotated L and T visual search measuring RT and eye movements.
Experiment 1 found, with the more sensitive eye movement measures, that unique clarity strongly captured attention while unique blur weakly repelled attention towards nearby clarity (or clarity, especially that close to blur, captured attention). Experiment 2 found evidence that visual resolution is not a search asymmetry because the influence of resolution on selective attention was contingent upon its task-relevance, which theoretically supports the presence of a reconfigurable resolution feature detector. Experiment 3 used a feature search for either blur or clarity (i.e., resolution was task-relevant) and compared RT x Set Size search slopes. Both blurred and clear target present RT x Set Size search slopes were ~1 msec/item. The results strongly supported that blur and clarity are both processed preattentively, and provided additional evidence that resolution is not a search asymmetry. Overall, the current studies shed light on how visual resolution is processed and guides selective attention. The results revealed that visual resolution is processed preattentively and has a dynamic relationship with selective attention. Predicting how resolution will guide attention requires knowledge of whether resolution is task-relevant or task-irrelevant. By increasing our understanding of how resolution contrast guides attention, we can potentially apply this knowledge to direct viewers’ attention more efficiently using computer screens and heads-up displays.
