271

Measures and Models of Covert Visual Attention in Neurotypical Function and ADHD

Mihali, Andra 01 August 2018 (has links)
<p> Covert attention allows us to prioritize relevant objects in the visual environment without directing our gaze towards them. Attention affects the quality of perceptual representations, a quality which can be quantified with precision (or its inverse, variability) parameters in simple psychophysical models that capture the relationship between stimulus strength and an observer's behavior. Two main types of attention, divided and selective, have been studied in recent decades with two corresponding classic paradigms, visual search and visual spatial orienting. </p><p> In this thesis, we developed variants of these tasks to address questions related to visual attention in neurotypical individuals and in ADHD. In addition to precision-related parameters derived from behavior, we measured the observers' fixational eye movements, developed a new algorithm to detect microsaccades, and explored their possible role as an oculomotor correlate of precision. </p><p> In a first investigation, we built upon a paradigm designed to increase the chances of probing divided attention. Specifically, we extended a visual search task with heterogeneous distractors and explored the effects on performance of set size, task (detection and localization), time (perception and memory), and space. An optimal observer model with a variable-precision encoding stage and an optimal decision rule was able to capture behavior in a task more naturalistic than target detection, namely target localization. Performance decreased with the set size of the search array for both detection and localization; so did precision. As expected, precision was higher in the perception condition than in the memory condition. We found the same pattern of results with visual search arrays with reduced stimulus spacing; observers achieved comparable precision parameters, albeit with increased reaction times. </p><p> The nature of the attentional impairment in ADHD has been elusive. By using a new task that combines visuo-spatial orienting with a feature dimension switch between orientation and color, we found an increased perceptual variability parameter in the ADHD group, which was correlated with an executive control metric. A classifier based on perceptual variability yielded high diagnosis accuracy. These results suggest that using basic psychophysical paradigms to capture the encoding precision of low-level features deserves further study in ADHD, especially in conjunction with attention and executive function. </p><p> Measures of covert attention have included aspects of fixational eye movements, especially microsaccades. Inferences about the roles of microsaccades in perception and cognition depend on accurate detection algorithms. By using a new hidden semi-Markov model to capture sequences of microsaccades amongst drift, and an inference algorithm based on this model, we found that microsaccades were more robustly detected under high measurement noise from the eye tracker. Applying this algorithm to the eye movement traces of ADHD and control participants, we found a correlation between post-stimulus microsaccade rate and the perceptual variability parameter, suggesting a potential oculomotor mechanism for the less precise perceptual encoding in ADHD. </p><p> We conclude that by using and developing variants of visual attention paradigms, psychophysical models, and oculomotor measurements, we can enhance our understanding of brain processes in health and disease.</p>
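The variable-precision encoding stage described in this abstract can be illustrated with a minimal simulation. This is a sketch under common assumptions, not the author's implementation: on each trial, encoding precision is drawn from a gamma distribution, and the measurement is the stimulus corrupted by Gaussian noise whose variance is the inverse of that precision. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_variable_precision(stimuli, mean_precision, scale, n_trials=10000):
    """Simulate noisy encoding under a variable-precision model:
    on each trial, precision J is drawn from a gamma distribution,
    and the measurement x is the stimulus s plus Gaussian noise
    with variance 1/J."""
    J = rng.gamma(shape=mean_precision / scale, scale=scale, size=n_trials)
    s = rng.choice(stimuli, size=n_trials)
    x = s + rng.normal(0.0, 1.0 / np.sqrt(J))
    return s, x, J

stimuli = np.linspace(-20, 20, 9)   # hypothetical stimulus orientations (deg)
s, x, J = simulate_variable_precision(stimuli, mean_precision=2.0, scale=0.5)
# The average squared measurement error should be close to E[1/J].
print(np.mean((x - s) ** 2))
```

Model fitting would then search for the gamma parameters that maximize the likelihood of the observed responses; higher attentional load (e.g. larger set size) is typically captured as lower mean precision.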
272

Leveraging Deep Neural Networks to Study Human Cognition

Peterson, Joshua C. 21 November 2018 (has links)
<p> The majority of computational theories of inductive processes in psychology derive from small-scale experiments with simple stimuli that are easy to represent. However, real-world stimuli are complex, hard to represent efficiently, and likely require very different cognitive strategies to cope with them. Indeed, the difficulty of such tasks is part of what makes humans so impressive, yet methodological resources for modeling their solutions are limited. This presents a fundamental challenge to the precision of psychology as a science, especially if traditional laboratory methods fail to generalize. Recently, a number of computationally tractable, data-driven methods such as deep neural networks have emerged in machine learning for deriving useful representations of complex perceptual stimuli, but they are explicitly optimized in service of engineering objectives rather than modeling human cognition. It has remained unclear to what extent engineering models, while often state-of-the-art in terms of human-level task performance, can be leveraged to model, predict, and understand humans.</p><p> In the following, I outline a methodology by which psychological research can confidently leverage representations learned by deep neural networks to model and predict complex human behavior, potentially extending the scope of the field. In Chapter 1, I discuss the challenges to ecological validity in the laboratory that may be partially circumvented by technological advances and trends in machine learning, and weigh the advantages and disadvantages of bootstrapping from largely uninterpretable models. In Chapter 2, I contrast methods from psychology and machine learning for representing complex stimuli like images. Chapter 3 provides a first case study of applying deep neural networks to predict whether objects in a large database of images will be remembered by humans. Chapter 4 provides the central argument for using representations from deep neural networks as proxies for human psychological representations in general. To do this, I establish and demonstrate methods for quantifying their correspondence, improving their correspondence at minimal cost, and applying the result to the modeling of downstream cognitive processes. Building on this, Chapter 5 develops a method for modeling human subjective probability over deep representations in order to capture multimodal mental visual concepts such as "landscape". Finally, in Chapter 6, I discuss the implications of the overall paradigm espoused in the current work, along with the most crucial challenges ahead and potential ways forward. The overall endeavor is almost certainly a stepping stone to methods that may look very different in the near future, as the gains from leveraging machine learning methods are consolidated and made more interpretable and useful. The hope is that a synergy can be formed between the two fields, each bootstrapping and learning from the other.</p>
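Quantifying the correspondence between network representations and human judgments is often done in practice by regressing behavioral data on network activations. The sketch below uses random stand-in features and a closed-form ridge regression; all names and values are hypothetical illustrations, not the dissertation's code or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: rows of `features` stand in for deep-network
# activations for each image, and `judgments` for the corresponding
# human ratings (e.g. memorability scores). Here both are synthetic.
n_images, n_features = 200, 50
features = rng.normal(size=(n_images, n_features))
true_w = rng.normal(size=n_features)
judgments = features @ true_w + rng.normal(scale=0.5, size=n_images)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w = ridge_fit(features, judgments)
pred = features @ w
# Correlation between model predictions and behavior is one simple
# measure of representation-behavior correspondence.
r = np.corrcoef(pred, judgments)[0, 1]
print(round(r, 3))
```

In real applications the regression would be evaluated on held-out images to avoid overfitting the correspondence estimate.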
273

Uncovering Human Visual Priors

Langlois, Thomas A. 21 November 2018 (has links)
<p> Visual perception can be understood as an inferential process that combines noisy sensory information with internalized knowledge drawn from previous experience. In statistical Bayesian terms, internal representations of the visual environment can be understood as posterior estimates obtained by weighting imperfect sensory information (a likelihood) by internalized biases (a prior). Given limited perceptual resources, it is advantageous for the visual system to capitalize on predictable regularities of the visual world and internalize them in the form of priors. This dissertation presents novel findings in the domains of spatial vision and visual memory, as well as new work on memory for the 3D orientation of objects. In all cases, an unprecedented signal-to-noise ratio, achieved by combining serial reproduction chains (a "telephone game" procedure) with non-parametric kernel density estimation techniques, reveals a number of striking intricacies in the prior for the first time. Methodological implications, as well as implications for amending prior empirical findings and revisiting past theoretical explanations, are discussed.</p>
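A serial reproduction chain can be simulated under simple Gaussian assumptions to show why the method reveals the prior: each link perceives the previous response noisily and reproduces a posterior estimate, so the chain's stationary distribution tracks the prior, which a kernel density estimate can then recover. This is an illustrative sketch with hypothetical parameters, not the dissertation's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def serial_reproduction(start, n_steps, prior_mean, prior_sd, noise_sd):
    """One 'telephone game' chain: each step observes the previous
    response with Gaussian noise and reproduces the posterior mean
    under a Gaussian prior, plus a little response noise. Under these
    assumptions the chain drifts toward the prior."""
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)  # weight on the observation
    x, out = start, []
    for _ in range(n_steps):
        obs = x + rng.normal(0, noise_sd)
        x = w * obs + (1 - w) * prior_mean + rng.normal(0, 0.1)
        out.append(x)
    return np.array(out)

# Two chains seeded far from the prior mean; drop early burn-in steps.
chains = np.concatenate([serial_reproduction(s, 300, prior_mean=0.0,
                                             prior_sd=1.0, noise_sd=1.0)
                         for s in (-5.0, 5.0)])
samples = chains.reshape(2, -1)[:, 50:].ravel()

def kde(samples, grid, bandwidth=0.3):
    """Gaussian kernel density estimate evaluated over `grid`."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(-4, 4, 81)
density = kde(samples, grid)
print(grid[np.argmax(density)])  # the peak should sit near the prior mean
```

With human participants the "posterior mean" step is the participant's reproduction, and the recovered density is an empirical estimate of the shared prior rather than of a known Gaussian.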
274

Visual Search in Naturalistic Imagery

Schreifels, Dave J. 02 November 2018 (has links)
<p> Visual search has been extensively studied in the laboratory, yielding broad insights into how we search through and attend to the world around us. To know whether these insights are valid, however, this research must not be confined to the sanitized imagery typically found within the lab. Comparatively little research has been conducted on visual search in naturalistic settings, and this gap must be bridged to further our understanding of visual search. Based on the results of Experiment 1, Experiment 2 was conducted to test three common effects observed in previous laboratory studies of visual search: the effects of background complexity, target-background similarity, and target-distractor similarity on response time. Results show that these effects carry over to the natural world, but also that other effects are present that are not accounted for by current theories of visual search. The argument is made for modifying these theories to incorporate this naturalistic information. </p>
275

A Model of Relational Reasoning through Selective Attention

Chapman, G. William, IV. 21 September 2018 (has links)
<p> Understanding the relationships between sets of objects is a fundamental requirement of cognitive skills such as learning from example or generalization. For example, recognizing that planets revolve around stars, and not the other way around, is essential for understanding astronomical systems. However, the method by which we recognize and apply such relations is not clearly understood. In particular, how a set of neurons is able to represent which object fulfills which role (role binding) has presented difficulty in past studies. Here, we propose a systems-level model, which utilizes selective attention and working memory, to address issues of role binding. In our model, selective attention is used to perceive visual stimuli such that all relations can be reframed as an operation from one object unto another, so binding becomes an issue only in the initial recognition of the direction of the relation. We test and refine this model using EEG recorded during a second-order relational reasoning task. Epoched EEG was projected to the cortical surface, providing source-space estimates of event-related potentials. Permutation testing revealed eight cortical clusters which responded differentially based on the specifics of a trial. Dynamic connectivity between these clusters was estimated with the directed transfer function to reveal the dynamic causality between regions. Our results support the model, identifying a distinct bottom-up network that identifies relations between single pairs of objects, along with a top-down biasing network that may reorient attention to sequential pairs of objects. Taken together, our results show that relational reasoning can be performed by a distributed network utilizing selective attention.</p>
276

Joint Action Enhances Motor Learning

January 2015 (has links)
abstract: Learning a novel motor pattern through imitation of the skilled performance of an expert has been shown to result in better learning outcomes relative to observational or physical practice. The aim of the present project was to examine whether the advantages of imitational practice could be further augmented through a supplementary technique derived from my previous research. This research has provided converging behavioral evidence that dyads engaged in joint action on a familiar task requiring spatial and temporal synchrony end up developing an extended overlap in their body representations, termed a joint body schema (JBS). The present research examined whether inducing a JBS between a trainer and a novice trainee, prior to having the dyad engage in imitation practice on a novel motor pattern, would enhance both the training process and its outcomes. Participants either worked with their trainer on a familiar joint task to develop the JBS (Joint condition) or performed a solo equivalent of the task while being watched by their trainer (Solo condition). Participants in both groups then engaged in blocks of alternating imitation practice and free production of a novel manual motor pattern while their motor output was recorded. Analyses indicated that the Joint participants outperformed the Solo participants in the ability to synchronize the spatial and temporal components of their imitation movements with the trainer's pattern-modeling movements. The same group showed superior performance when attempting to freely produce the pattern. These results carry significant theoretical and translational potential for the fields of motor learning and rehabilitation. / Dissertation/Thesis / Doctoral Dissertation Psychology 2015
277

Multiscale Interactions in Psychological Systems

January 2016 (has links)
abstract: For many years now, researchers have documented evidence of fractal scaling in psychological time series. Explanations of fractal scaling have come from many sources but those that have gained the most traction in the literature are theories that suggest fractal scaling originates from the interactions among the multiple scales that make up behavior. Those theories, originating in the study of dynamical systems, suffer from the limitation that fractal analysis reveals only indirect evidence of multiscale interactions. Multiscale interactions must be demonstrated directly because there are many means to generate fractal properties. In two experiments, participants performed a pursuit tracking task while I recorded multiple behavioral and physiological time series. A new analytical technique, multiscale lagged regression, was introduced to capture how those many psychological time series coordinate across multiple scales and time. The results were surprising in that coordination among psychological time series tends to be oscillatory in nature, even when the series are not oscillatory themselves. Those and other results demonstrate the existence of multiscale interactions in psychological systems. / Dissertation/Thesis / Doctoral Dissertation Psychology 2016
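Fractal scaling in a behavioral time series is commonly estimated with detrended fluctuation analysis (DFA); the fractal properties this abstract refers to would be quantified by a procedure of this general family. Below is a minimal numpy sketch of DFA, exercised on white noise, for which the expected scaling exponent is about 0.5; it is an illustration, not the dissertation's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def dfa(signal, scales):
    """Detrended fluctuation analysis: integrate the mean-centered
    series, split it into windows at each scale, remove a linear trend
    per window, and regress log fluctuation on log scale. The slope
    alpha characterizes fractal scaling (alpha ~ 0.5 for white noise,
    ~1.0 for 1/f noise)."""
    y = np.cumsum(signal - np.mean(signal))
    F = []
    for s in scales:
        n_win = len(y) // s
        windows = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for win in windows:
            coef = np.polyfit(t, win, 1)            # per-window linear trend
            rms.append(np.mean((win - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))             # fluctuation at this scale
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha

white = rng.normal(size=4096)
alpha = dfa(white, [16, 32, 64, 128, 256])
print(round(alpha, 2))  # near 0.5 for white noise
```

DFA alone shows only that a series scales fractally; as the abstract argues, demonstrating multiscale interactions requires going beyond it, e.g. to cross-scale regression between multiple simultaneously recorded series.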
278

Design and Evaluation of Auditory-Supported Air Gesture Controls in Vehicles

Sterkenburg, Jason 05 June 2018 (has links)
<p> The number of visual distraction-caused crashes highlights a need for non-visual information displays in vehicles. Auditory-supported air gesture controls could fill that need. This dissertation covers four experiments that explore the design of an auditory-supported air gesture system and examine its real-world influence on driving performance. The first three experiments compared different prototype gesture control designs as participants used the systems in a driving simulator. The fourth experiment sought to answer more basic questions about how auditory displays influence performance in target acquisition tasks. Results from Experiment 1 offered optimism for the potential of auditory-supported displays for navigating simple menus by showing a decrease in off-road glance time compared to visual-only displays. Experiment 1 also showed a need to keep menu items few in number but large in size. Results from Experiment 2 showed that auditory-supported air gesture controls can yield safer driving performance relative to touchscreens, but at the cost of slight decrements in menu task performance. Results from Experiment 3 showed that drivers can navigate through simple menu structures entirely eyes-free, with no visual displays, and even with less effort compared to visual displays and visual-plus-auditory displays. Experiment 4 showed that auditory displays convey information and allow for target selection, but result in slower and relatively less accurate selections compared to displays with visual information, especially for more difficult target selections. Overall, the experimental data highlight the potential of auditory-supported air gesture controls for increasing eyes-on-road time relative to visual displays, both in touchscreens and air gesture controls. However, this benefit came at a slight cost to target selection performance, as participants generally took longer to process auditory information in simple target acquisition tasks.
Experimental results are discussed in the context of multiple resource theory and Fitts's law. Design guidelines and future work are also discussed. </p>
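Fitts's law, cited above as an interpretive framework, predicts movement time for a pointing task from target distance and width. A small sketch using the original Fitts formulation of the index of difficulty; the coefficient values are illustrative (`a` and `b` would normally be fit to observed selection data):

```python
import math

def fitts_mt(a, b, distance, width):
    """Fitts's law: MT = a + b * log2(2D/W), where log2(2D/W) is the
    index of difficulty (ID) in bits. Larger distance or smaller width
    raises ID, and predicted movement time grows linearly with ID."""
    ID = math.log2(2 * distance / width)
    return a + b * ID, ID

# Hypothetical intercept/slope (seconds, seconds/bit) and target geometries.
mt_small, id_small = fitts_mt(0.2, 0.1, distance=30, width=2)   # small, far target
mt_large, id_large = fitts_mt(0.2, 0.1, distance=30, width=10)  # larger target
print(id_small, id_large)
```

This is consistent with the finding above that menus should keep items few in number but large in size: larger on-screen targets lower the index of difficulty and thus the predicted selection time.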
279

Sensory-Motor Mechanisms Unify Psychology: Motor Effort and Perceived Distance to Cultural Out-Groups

January 2013 (has links)
abstract: This thesis proposes that a focus on the bodily level of analysis can unify explanations of behavior in cognitive, social, and cultural psychology. To examine this unifying proposal, a sensorimotor mechanism with reliable explanatory power in cognitive and social psychology was used to predict a novel pattern of behavior in a cultural context, and these predictions were examined in three experiments. Specifically, the finding that people judge objects that require more motor effort to interact with as farther away in visual space was adapted to predict that people with interdependent self-construal (SC), relative to those with independent SC, would visually perceive their cultural out-groups as farther away than their cultural in-groups. Justifying this cultural extension of what is primarily a cognitive mechanism is the assumption that, unlike independents, interdependents interact almost exclusively with in-group members, and hence their sensorimotor system is less tuned to cross-cultural interactions. Thus, interdependents, more so than independents, expect looming cross-cultural interactions to be effortful, which may inflate their judgment of distance to out-groups. Two experiments confirmed these predictions: a) interdependent Americans, compared to independent Americans, perceived American confederates (in-group) as visually closer; b) interdependent Arabs, compared to independent Arabs, perceived Arab confederates (in-group) as closer; and c) interdependent Americans, relative to independent Americans, perceived Arab confederates (out-group) as farther away. A third study directly established the proposed relation between motor effort and distance to human targets: American men perceived other American men as closer after an easy interaction than after a more difficult interaction. Together, these results demonstrate that one and the same sensorimotor mechanism can explain and predict homologous behavioral patterns across the subdisciplines of psychology. / Dissertation/Thesis / M.A. Psychology 2013
280

Neural Mechanisms of Conceptual Relations

Lewis, Gwyneth A. 22 March 2017 (has links)
<p> An overarching goal in neurolinguistic research is to characterize the neural bases of semantic representation. A particularly relevant question concerns whether we represent features and events (a) together in a generalized semantic hub or (b) separately in distinct but complementary systems. While the left anterior temporal lobe (ATL) has been strongly implicated in representing both feature-based (taxonomic) knowledge and event-based (thematic) knowledge, recent evidence suggests that the temporal parietal junction (TPJ) plays a unique role in thematic semantics. The primary goal of this dissertation was to identify and characterize the neural mechanisms that support taxonomic and thematic semantics, and the broader goal was to shed further light on the neural stages of word comprehension. We conducted two magnetoencephalography (MEG) experiments to identify neural indices of visual representations (Chapter 1) and to examine ATL vs. TPJ involvement in taxonomic and thematic semantics (Chapter 2), respectively. We also conducted a functional magnetic resonance imaging (fMRI) experiment to characterize the role of the TPJ in thematic inhibition vs. thematic semantics (Chapter 3). The three experiments employed semantic judgment tasks, equated stimulus conditions on linguistic and psycholinguistic variables, and supplemented analyses with continuous variables as more sensitive hallmarks of lexical access. </p><p> Chapter 1 demonstrated that initial stages of spoken word recognition involve contact with visual representations of features associated with the real-world referent. The early timing of the effect suggests that sensory aspects of meaning are not necessarily a product of lexical activation during speech recognition. Chapter 2 demonstrated ATL selectivity for taxonomic relations, and moderate TPJ selectivity for both taxonomic and thematic relations. Results for the TPJ could reflect either inhibition of irrelevant information or conceptual processing. Chapter 3 tested these possibilities by requiring inhibition of the opposite relation in two semantic judgment tasks. Results of this experiment indicate that the TPJ plays a role both in thematic semantics <i>and</i> in inhibitory processing when the conceptual computation requires it. </p><p> In sum, this dissertation focuses on topics pertaining to the neural encoding of words with respect to form and meaning. Across three neurolinguistic experiments, we addressed (1) contributions of visual representations during lexical access, (2) ATL and TPJ selectivity for thematic vs. taxonomic concepts, and (3) TPJ inhibition vs. specialization for thematic concepts.</p>
