151

Hierarchical error processing during motor control

Krigolson, Olave 26 September 2007 (has links)
The successful execution of goal-directed movement requires the evaluation of errors at many levels. On the one hand, the motor system needs to be able to evaluate ‘high-level’ errors indicating the success or failure of a given movement. On the other hand, as a movement is executed, the motor system also has to correct for ‘low-level’ errors: an error in the initial motor command, or a change in the motor command necessary to compensate for an unexpected change in the movement environment. The goal of the present research was to provide electroencephalographic evidence that error processing during motor control is organised hierarchically. The present research demonstrated that high-level motor errors indicating the failure of a system goal elicited the error-related negativity, a component of the event-related brain potential (ERP) evoked by incorrect responses and error feedback. The present research also demonstrated that low-level motor errors are associated with a parietally distributed ERP component related to the focusing of visuo-spatial attention and context-updating. Finally, the present research proposes a viable neural model for hierarchical error processing during motor control.
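For readers unfamiliar with how an ERP component such as the error-related negativity is typically quantified, the sketch below shows a generic difference-wave calculation: epoched EEG is averaged separately for error and correct trials and the two averages are subtracted. This is a minimal illustration of the standard technique, not the author's analysis pipeline; the sampling rate, epoch length, electrode, and placeholder data are all assumptions.

```python
# Generic ERN-style difference wave from epoched single-channel EEG.
# All data here are random placeholders standing in for real recordings.
import numpy as np

fs = 250                        # assumed sampling rate (Hz)
n_samples = int(0.8 * fs)       # 800 ms epochs, e.g. -200 to 600 ms around the response

rng = np.random.default_rng(0)
error_epochs = rng.normal(size=(40, n_samples))     # trials x samples (placeholder "FCz" data)
correct_epochs = rng.normal(size=(160, n_samples))

erp_error = error_epochs.mean(axis=0)       # average waveform on error trials
erp_correct = correct_epochs.mean(axis=0)   # average waveform on correct trials
difference_wave = erp_error - erp_correct   # the ERN appears as a negativity shortly after the response

# Quantify the ERN as the most negative point in an assumed 0-150 ms post-response window
window = slice(int(0.2 * fs), int(0.35 * fs))   # response onset falls at 200 ms into the epoch
ern_amplitude = difference_wave[window].min()
print(f"ERN amplitude (arbitrary units): {ern_amplitude:.2f}")
```
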
152

Psychometric evaluation of the Twelve Elements Test and other commonly used measures of executive function

Sira, Claire Surinder 29 November 2007 (has links)
Objective: The Six Elements Task (SET; Shallice and Burgess, 1991; Burgess et al., 1996) measures examinees’ ability to plan and organize their behaviour, form strategies for novel problem solving, and self-monitor. The task has adequate specificity (Wilson et al., 1996) but questionable sensitivity to mild impairments in executive function (Jelicic et al., 2001). The SET is also vulnerable to practice effects, has a limited range of possible scores, and shows ceiling effects. This dissertation sought to evaluate the validity and clinical utility of a modification of the SET, the Twelve Elements Test (12ET), which increases the difficulty of the test and expands the range of possible scores in order to make it more suitable for serial assessments. Participants and Methods: The sample included 26 individuals with mixed acquired brain injury (ABI) and 26 healthy matched controls (20–65 years). Participants completed a battery of neuropsychological tests on two occasions eight weeks apart. To control for confounding variables in executive function test performance, measures of memory, working memory, intelligence, substance abuse, pain, mood, and personality were included. Self and informant reports of executive dysfunction were also completed. The two groups’ performances on the various measures were compared, and the external validity of the 12ET was examined. In addition, normative data and information for reliable change calculations were tabulated. Results: The ABI group exhibited very mild executive function deficits on established measures. The matched control group attempted more tasks on the 12ET, but the difference was not significant. Neither group tended to break the rule of the task. The 12ET showed convergent validity, correlating significantly with measures of cognitive flexibility (Trailmaking B and Ruff Figural Fluency) and a measure of planning (Tower of London). The 12ET and published measures were also significantly correlated with intelligence in the brain-injured group. The 12ET did not show divergent validity with a test of visual scanning speed (Trailmaking A). No demographic variables were significant predictors of 12ET performance at Time 2 over and above performance at Time 1, and both participant groups obtained the same benefit from practice. The 12ET did not suffer from ceiling effects on the second administration, and the test-retest reliability of the 12ET variables ranged from low (r = .22 for Rule Breaks in the brain-injured group) to high (r = .78 for Number of Tasks Attempted in the control group). Conclusions: Despite their (often severe) brain injuries, this sample of brain-injured participants did not demonstrate executive impairments on many published tests, and their scores were not significantly different from the control group’s. Therefore, it was not possible to determine whether the 12ET is a more sensitive measure of mild executive deficits than the SET. However, the increased range did reduce the tendency for participants to perform at ceiling levels. The 12ET showed a number of significant correlations with other executive measures, particularly for the brain-injured group, though these correlations may have been moderated by general intelligence. Two variables of the 12ET, deviation from the optimal amount of time per task and Number of Tasks Completed, showed promise as measures of reliable change in this sample over an 8-week interval.
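The abstract mentions tabulating information for reliable change calculations. As a hedged illustration of the kind of calculation such tables support, the sketch below implements a standard Jacobson–Truax-style reliable change index with a practice-effect adjustment. The test-retest r of .78 comes from the abstract; the baseline SD, practice effect, and example scores are hypothetical and not taken from the dissertation.

```python
# Standard reliable-change calculation (illustrative parameters only).
import math

def reliable_change_index(time1, time2, sd_baseline, retest_r, practice_effect=0.0):
    """RCI = (observed change - expected practice effect) / SE of the difference."""
    sem = sd_baseline * math.sqrt(1 - retest_r)   # standard error of measurement
    se_diff = math.sqrt(2) * sem                  # standard error of a difference score
    return (time2 - time1 - practice_effect) / se_diff

# Hypothetical example: Number of Tasks Attempted on the 12ET across an 8-week retest
rci = reliable_change_index(time1=8, time2=10, sd_baseline=2.0,
                            retest_r=0.78, practice_effect=1.0)
print(f"RCI = {rci:.2f}; |RCI| > 1.96 would suggest change beyond measurement error")
```
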
153

Diagnosing dementia with cognitive tests: are demographic corrections useful?

O'Connell, Megan Eleine 02 January 2008 (has links)
Diagnostic biases against individuals of advanced age or with few years of formal education arise because age and education are commonly related to performance on cognitive tests; demographic corrections for these tests are therefore used. Corrections are complicated, however, by an association between demographic variables and dementia diagnoses. This dissertation examined the dementia diagnostic accuracy of demographic corrections for cognitive tests. Experiment I tested whether, in the context of skewed tests that violate the statistical assumptions of linearity and homoscedasticity, the accuracy of demographically-corrected test scores would be reduced. Experiment II tested whether demographic corrections would be appropriate only for biased factors rather than for the total score of multifactorial tests. Experiment III explored whether demographic corrections would be inappropriate under conditions where the dementia pathology overrides the association between cognitive test scores and demographic variables. Experiment IV explored whether demographic corrections would be inappropriate when the demographic variables were themselves risk factors for dementia, as the corrections would then remove predictive variance. Experiment V explored aspects particular to regression-based demographic corrections that might adversely affect diagnostic accuracy. Experiments I to V were simulation-based; consequently, Experiment VI explored whether these findings could be replicated using regression-adjusted scores in a previously collected clinical database. Finally, Experiment VII used clinical data in conjunction with published, demographically stratified clinical normative data to test the generalizability of these findings to clinical practice. Using comparisons of area under the receiver operating characteristic curve, demographically-corrected scores repeatedly failed to improve upon the dementia diagnostic accuracy of uncorrected cognitive test scores, regardless of whether the corrections were regression-based or based on demographically stratified normative data. Demographic corrections reduced dementia diagnostic accuracy when cognitive test scores were skewed, or when adjustments were regression-based and the demographic variables were risk factors for dementia. When dementia pathology supersedes any association between cognitive test scores and demographic variables, demographic corrections do not affect the relative diagnostic accuracy of corrected versus uncorrected test scores. Overall, these results suggest that demographic corrections for cognitive test scores should be used with caution when the goal is to maximize dementia diagnostic accuracy.
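As a rough illustration of the comparison described above, the sketch below simulates a regression-based demographic correction (residualising a cognitive score on age and education within non-demented controls) and compares the AUC of corrected versus uncorrected scores for a simulated dementia outcome. All distributions, effect sizes, and parameter values are illustrative assumptions, not the dissertation's data or simulation design.

```python
# Simulated comparison of raw vs. demographically-corrected scores for dementia detection.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
age = rng.uniform(65, 90, n)
education = rng.uniform(6, 18, n)
# Age is both related to test performance and, here, a risk factor for dementia
dementia = rng.random(n) < (0.05 + 0.005 * (age - 65))
score = 30 - 0.15 * (age - 65) + 0.2 * education - 6 * dementia + rng.normal(0, 2, n)

# Fit the correction equation in the non-demented (normative) subsample only
controls = ~dementia
reg = LinearRegression().fit(np.c_[age[controls], education[controls]], score[controls])
corrected = score - reg.predict(np.c_[age, education])   # demographically-corrected residual

# Lower cognitive scores indicate dementia, so negate scores for the AUC calculation
print("AUC, uncorrected:", round(roc_auc_score(dementia, -score), 3))
print("AUC, corrected:  ", round(roc_auc_score(dementia, -corrected), 3))
```
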
154

Reinforcement learning in children and adolescents with Fetal Alcohol Spectrum Disorder (FASD)

Engle, Jennifer Aileen 24 July 2009 (has links)
Objective: This study examined various dimensions of reinforcement learning in children with Fetal Alcohol Spectrum Disorder (FASD). Specific investigations included (1) speed of learning from reinforcement; (2) impact of the concreteness of the reinforcer; (3) comparison of responses to two types of shifts in reinforcement; and (4) the relationship of reinforcement learning to parent-reported social and behavioral functioning. Participants & Methods: Participants included 19 children with FASD without an intellectual disability, ages 11 to 17, and 19 age- and sex-matched Control participants (11 male, 8 female per group). Each participant completed two novel visual reinforcement learning discrimination tasks (counterbalanced), each administered twice. The first task involved categorical learning followed by either a reversal or a nonreversal shift. The second task involved a computerized probabilistic paradigm (70% contingent feedback) administered using either tokens or points, redeemable for a prize. Parents completed a history questionnaire, the Children’s Learning Questionnaire (McInerney, 2007), and the Child Behavior Checklist (Achenbach & Rescorla, 2001). Results: The Control group demonstrated significantly stronger probabilistic reinforcement learning, although the groups showed similar rates of between-condition improvement (learning savings). Furthermore, the concreteness of the reinforcer (tokens vs. points) made no significant difference in learning characteristics for either group. In contrast to probabilistic reinforcement learning, there were no significant group differences in categorical discrimination or shift learning. The FASD group demonstrated the age-appropriate pattern of learning reversals faster than nonreversals, whereas there was no difference between the two types of shifts in the Control group. Parent-report measures identified a priori were not significantly correlated with task performance when each group was examined separately. Conclusions: There was no support for the hypothesis that reinforcement learning occurs in a functionally different manner in children with FASD. Rather, reinforcement learning may take longer, paralleling the generally slower speed of all learning in these children, and may be more dependent on recent information. This suggests that children with FASD without intellectual disability are able to learn from reinforcement if given sufficient consistent repetition. However, failures of reinforcement learning may occur for a variety of reasons not addressed in this study, including difficulty with transfer of learning or impulsivity.
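For readers unfamiliar with the paradigm, the sketch below illustrates the general shape of a two-choice probabilistic discrimination with 70% contingent feedback, learned by a simple delta-rule agent. It is not the study's task code; the trial count, learning rate, and softmax temperature are hypothetical.

```python
# Two-choice probabilistic discrimination (70% contingent feedback) with a delta-rule learner.
import numpy as np

rng = np.random.default_rng(1)
n_trials, alpha, beta = 120, 0.15, 3.0   # assumed trial count, learning rate, inverse temperature
values = np.zeros(2)                     # learned value of each stimulus
correct_choices = 0

for t in range(n_trials):
    # Softmax choice between the two stimuli
    p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (values[0] - values[1])))
    choice = 0 if rng.random() < p_choose_0 else 1

    # Stimulus 0 is "correct": choosing it is rewarded 70% of the time,
    # choosing stimulus 1 is rewarded only 30% of the time
    rewarded = (choice == 0) == (rng.random() < 0.7)
    reward = 1.0 if rewarded else 0.0

    values[choice] += alpha * (reward - values[choice])   # delta-rule update
    correct_choices += (choice == 0)

print(f"Chose the richer stimulus on {correct_choices}/{n_trials} trials")
```
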
155

A formal investigation of dopamine’s role in Attention-Deficit/Hyperactive Disorder: evidence for asymmetrically effective reinforcement learning signals

Cockburn, Jeffrey 14 January 2010 (has links)
Attention-Deficit/Hyperactivity Disorder (ADHD) is a well-studied but poorly understood disorder. Given that the underlying neurological mechanisms involved in the disorder have yet to be established, diagnosis depends on behavioural markers. However, recent research has begun to associate ADHD with dopamine system dysfunction, though consensus on the nature of dopamine’s role in ADHD has yet to be established. Here, I use a computational modelling approach to investigate two opposing theories of the dopaminergic dysfunction in ADHD. The hyperactive dopamine theory posits that ADHD is associated with a midbrain dopamine system that produces abnormally large prediction error signals, whereas the dynamic developmental theory argues that abnormally small prediction errors give rise to ADHD. Given that these two theories center on the size of prediction errors encoded by the midbrain dopamine system, I formally investigated the implications of each theory within the framework of temporal-difference learning, a reinforcement learning algorithm demonstrated to model midbrain dopamine activity. The results presented in this thesis suggest that neither theory provides a good account of the behaviour of children with ADHD or of animal models of the disorder. Instead, my results suggest that ADHD is the result of asymmetrically effective reinforcement learning signals encoded by the midbrain dopamine system. More specifically, the model presented here reproduced behaviours associated with ADHD when positive prediction errors were more effective than negative prediction errors. The biological sources of this asymmetry are considered, as are other computational models of ADHD.
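The asymmetry described above lends itself to a compact illustration. The sketch below shows a temporal-difference-style value update in which positive and negative prediction errors are weighted by separate learning rates, so that positive errors can be made more "effective" than negative ones. The two-armed bandit environment and all parameter values are illustrative assumptions, not the thesis's model or its parameters.

```python
# Value learning with asymmetrically weighted prediction errors (illustrative only).
import numpy as np

def run_agent(alpha_pos, alpha_neg, reward_probs=(0.8, 0.2), n_trials=500, beta=4.0, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(len(reward_probs))
    choices = []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()      # softmax action selection
        a = rng.choice(len(q), p=p)
        r = float(rng.random() < reward_probs[a])
        delta = r - q[a]                                   # prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg      # asymmetric "effectiveness"
        q[a] += alpha * delta
        choices.append(a)
    return np.mean(np.array(choices) == 0)                 # preference for the richer option

# Positive errors more effective than negative ones, versus a symmetric learner
print("asymmetric learner:", round(run_agent(alpha_pos=0.3, alpha_neg=0.05), 2))
print("symmetric learner: ", round(run_agent(alpha_pos=0.15, alpha_neg=0.15), 2))
```
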
156

Neural mechanisms of cognitive control and reward learning in children with Attention Deficit Hyperactivity Disorder

Lukie, Carmen Noel 30 August 2010 (has links)
A substantial body of behavioural, genetic, and neurophysiological data suggests that Attention Deficit Hyperactivity Disorder (ADHD) is influenced by an underlying abnormality in the midbrain dopamine system. A previous study found that children with ADHD are unusually sensitive to the salience of rewards, which is mediated in part by the dopamine system (Holroyd, Baker, Kerns & Mueller, 2008). The current study aimed to replicate and extend that finding using event-related potentials (ERPs) recorded from typically developing children and children with ADHD as they navigated a “virtual T-Maze” in two conditions differing in reward salience. The children also completed a behavioural task designed to measure decision making and sensitivity to reward and punishment. Both groups of children responded to the behavioural task in a way indicative of increased sensitivity to reward. Unlike in the previous study, reward salience as reflected in the ERP had no effect in either the children with ADHD or the typically developing children. However, both groups displayed a larger error-related negativity (ERN) in the condition presented second.
159

Disruptive game design: a commercial design and development methodology for supporting player cognitive engagement in digital games

Howell, Peter Mark January 2015 (has links)
First-person games often support the player’s gradual accretion of knowledge of the game’s rules during gameplay. They thus focus on challenging and developing performative skills, which in turn supports the player in attaining feelings of achievement and skill mastery. An alternative, ‘disruptive’ game design approach is proposed that encourages players to engage in higher-order thinking in addition to performative challenges, requiring them to cognitively engage with the game at a deeper level. This deeper engagement stems from the player’s expectations of game rules and behaviours being disrupted rather than supported, requiring players to learn and re-learn the rules as they play. The disruptive approach to design aims to support players in satisfying not only their needs for achievement and mastery at a performative level, but also their needs for problem-solving and creativity. Utilising a Research through Design methodology, a model of game space is proposed that distinguishes the stages of a game’s creation, from conceptualisation through to the final player experience. The Ludic Action Model (LAM), developed from existing game studies and cognitive psychological theory, affords an understanding of how the player forms expectations in the game as played. A conceptual framework of game components is then constructed and mapped to the Ludic Action Model, providing a basis for understanding how different components of a game interact with and influence the player’s cognitive and motor processes. The Ludic Action Model and the conceptual framework of game components are used to construct the Disruptive Game Feature Design and Development (DisDev) model, created as a design tool for ‘disruptive’ games. The disruptive game design approach is then applied to the design, development, and publication of a commercial game, Amnesia: A Machine for Pigs (The Chinese Room, 2013). This application demonstrated the suitability of the design approach and the proposed models for establishing disruptive game features in the game as designed, developing those features in the game as created, and resolving them in the game as published, which the player then experiences in the game as played. A phenomenological template analysis of online player discussions of the game shows that players tend to evaluate their personal game as played (i.e. their personal play experience) in relation to their a priori game as expected (i.e. the experience they expected the game to provide). Players reported their play experiences in ways that suggested they had experienced cognitive engagement and higher-order thinking. However, player attitudes towards this type of play experience were highly polarised and seemingly dependent on the correspondence between actual and expected play experiences. The analysis also showed that different methods of disruption have a variable effect on the player experience depending on the primacy of the game feature being disrupted. Primary features are more effectively disrupted when the game’s responses to established player actions are subsequently altered. Secondary features, present only in some sections of the game, are most effectively disrupted when their initially contextualised behaviour is subsequently altered or recontextualised. In addition, story-based features are most effectively disrupted when their initial encoding is ambiguous, frustrating players’ attempts to form an initial understanding of them. These different methods of disruption may, however, be most effective when used in conjunction with one another.
