951. From lab to clinic: the practicality of using event related potentials in the diagnosis of Alzheimer's disease. Suh, Cheongmin, 13 July 2017.
The main objective of this study was to investigate whether event-related potentials (ERPs) can be used as a biomarker for staging disease severity in Alzheimer's disease (AD) within a heterogeneous group of patients presenting to a memory disorders clinic for initial evaluation. Based on the known progression of AD pathology, we hypothesized that ERP components would be abnormal in proportion to disease severity across mild cognitive impairment (MCI) due to AD, mild dementia due to AD, and moderate-to-severe dementia due to AD. The affected ERP components were predicted from the known sites of their neural generators. ERP peaks measured during an auditory oddball paradigm from twenty-two patients, AD (n = 9) and non-AD (n = 13), were compared to their clinical outcomes using multivariate ANCOVA controlling for age, with Bonferroni corrections. The predictive ability of significant ERP components was examined using a binary logistic regression model. A significant between-group effect was found for N100 distractor amplitude, F(2, 12) = 6.062, p = .015, partial η² = .503. The results supported our hypothesis that N100 amplitude would be increased in AD, suggesting that sensory gating may be more impaired in mild AD than in non-AD-related cognitive impairment.
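The analysis described above (per-component group comparisons with age as a covariate, Bonferroni correction, then logistic regression on the significant components) can be sketched in Python with statsmodels. This is a rough illustration under assumed data: the file name, column names, and group coding are hypothetical, not the author's actual pipeline.

```python
# Rough sketch of the described analysis: per-component ANCOVA (group effect with
# age as covariate), Bonferroni correction, then logistic regression on any
# significant component. File name, column names, and coding are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("erp_peaks.csv")  # hypothetical file: one row per patient

components = ["n100_distractor_amp", "p300_target_amp", "p50_amp"]  # hypothetical ERP measures
pvals = []
for comp in components:
    # ANCOVA as an OLS model: ERP measure ~ diagnostic group + age (covariate)
    fit = smf.ols(f"{comp} ~ C(group) + age", data=df).fit()
    aov = sm.stats.anova_lm(fit, typ=2)
    pvals.append(aov.loc["C(group)", "PR(>F)"])

# Bonferroni correction across the tested components
reject, p_corr, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

for comp, sig, p in zip(components, reject, p_corr):
    if sig:
        # Binary logistic regression: does the significant component predict AD vs. non-AD?
        logit = smf.logit(f"is_ad ~ {comp} + age", data=df).fit()
        print(f"{comp}: corrected p = {p:.4f}")
        print(logit.summary())
```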
952. The prognostic value of biomarkers in the evaluation of glioblastoma multiforme. Gascon, Marc-Andre, 01 November 2017.
BACKGROUND: Glioblastoma multiforme (GBM) is a highly heterogeneous tumor of the central nervous system (CNS) that exhibits considerable variation in its clinical course. Recently, the World Health Organization (WHO) published a classification system for tumors of the CNS that combines histological features with molecular parameters to determine tumor grade. The incorporation of molecular biomarkers that carry both prognostic and predictive value adds another level of objectivity to the glioma grading system and will help guide clinical decision-making. As such, the assessment of biomarkers has become an integral part of tumor evaluation in neuro-oncology. This curriculum will discuss the clinical relevance of the most recently studied biomarkers with prognostic and predictive value in the evaluation of GBM. Biomarkers regularly used for the assessment of GBM include IDH mutations, MGMT promoter methylation status, and EGFRvIII. Furthermore, this review will offer a perspective on experimental approaches currently under investigation for the treatment of GBM.
LITERATURE REVIEW FINDINGS: Methylation of the MGMT promoter region is associated with a better treatment response to temozolomide (TMZ), an alkylating agent. The treatment benefit was most prominent in the elderly population, and therapy should be individualized for that age group. Patients whose GBM carries IDH1/IDH2 mutations have a better overall prognosis, primarily due to higher sensitivity to chemo- and radiotherapy. The prognostic value of EGFRvIII remains controversial, although it may be associated with a worse prognosis. Nonetheless, EGFRvIII is an attractive target for molecularly targeted therapies because it is found only on tumor cells.
PROPOSED METHODS: A curriculum aimed at educating primary care providers (PCPs) about the most clinically significant biomarkers in GBM will be developed. The curriculum will be delivered in a PowerPoint format, and the hour-long lecture will be presented at national continuing medical education conferences. A pre- and post-test consisting of the same 10 multiple-choice questions will be administered on a voluntary basis to help evaluate knowledge acquisition from the curriculum. Results will be evaluated with a paired t-test, as sketched below. The tests will be administered through Poll Everywhere, a smartphone survey application.
CONCLUSION: There is increasing evidence to suggest that therapies should be individualized according to specific biomarkers with predictive value. PCPs are often the first providers to suspect the diagnosis of a brain tumor. It is therefore imperative for PCPs to be aware of the latest developments in neuro-oncology so that they may appropriately counsel patients.
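The proposed evaluation reduces to a paired comparison of each attendee's pre- and post-lecture scores. A minimal sketch with invented scores (not study data):

```python
# Minimal sketch of the proposed pre-/post-test comparison using a paired t-test.
# The score arrays are invented for illustration; each index is one attendee.
from scipy import stats

pre_scores = [4, 5, 6, 3, 7, 5, 4, 6, 5, 4]    # out of 10 multiple-choice items
post_scores = [7, 8, 8, 6, 9, 7, 6, 9, 8, 7]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```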
953. Maternal antibodies in autism: what is known and future directions. Bhanot, Anisha, 03 July 2018.
Autism spectrum disorder (ASD) is a highly prevalent neuropsychiatric disorder, currently affecting one in every 68 children. ASD is understood as a heterogeneous disorder, and individuals with this condition vary considerably in symptom presentation. This heterogeneity contributes to the difficulty researchers and clinicians face in determining the precise underlying mechanisms of, and treatments for, this condition. Furthermore, it remains unknown whether the variation in symptom manifestation is attributable to differences in the underlying etiologies of the disorder or to other factors yet to be identified.
Currently, ASD is believed to arise from interactions between genetic and environmental factors. The maternal immune system is one setting in which the environment may act upon genetic predispositions and alter fetal brain development. Given the importance of the immune environment during fetal development, maternal antibodies (Abs) directed against fetal proteins have been proposed to play a critical role in the pathology of ASD.
This thesis examines the literature on the role of maternal Abs in fetal development and their impact on the neuropathology of ASD. Studies have collected samples from mothers of children diagnosed with ASD and examined the reactivity patterns of the maternal Abs against fetal proteins. Through review of these methodologies and results, this thesis highlights the important insights obtained and proposes possible reasons for the disparity in findings. Lastly, it outlines future directions and the therapeutic implications of identifying the maternal Abs that could be involved in at least a subset of ASD cases.
954. An Assessment Tool for Participant Groupings for Human Neuroimaging Research: Measuring Musical Training. Shaw, Catheryn R., 03 July 2018.
The purpose of this study was to develop an assessment tool to measure musical training and experience for grouping participants in human neuroimaging research studies. To fulfill this purpose, the researcher: (1) completed a comprehensive review of the research literature to establish the essential content of the assessment tool; (2) developed an assessment tool to survey subjects about their musical training and experiences; (3) pilot tested the assessment tool and revised it according to preliminary analyses of its validity, reliability, and usefulness; (4) established the content validity and reliability of the assessment tool with subjects participating in a neuroimaging study designed to analyze the influences of musical training and experience on brain structure and function; and (5) determined whether the assessment tool functioned effectively in the selection and grouping of musically trained and musically untrained subjects for neuroimaging studies.

The assessment tool was administered to a purposive sample (N = 42) in the southeastern region of the United States. Participants were recruited on the basis of musical training, both its presence and its absence. The assessment was completed via the web-based platform Qualtrics. Coding of survey responses indicated differences in the participant pool that resulted in two groups: Musicians and Non-musicians. Further investigation yielded two subgroups within the Musician group: Moderate and Advanced.

Validity of the assessment tool was established using a three-step construction process: (a) development of a draft based on the existing literature and the musical training knowledge of the researcher, (b) review of the assessment tool by five music educators and performers, and (c) administration to a pilot group of five additional people with varying levels of musicianship. Content validity was further assessed by external reviewers, who rated each assessment item on a Likert-type scale: 1 = Not important, 2 = Slightly important, 3 = Fairly important, 4 = Important, and 5 = Very important. Reliability was established using interrater reliability and was determined to be 88.9%.

The discussion addressed the differences in musical training and experience that distinguished participants from one another. Implications were discussed regarding potential uses of the survey, as well as its potential effects on human neuroimaging research.
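A percent-agreement calculation of the kind that could underlie the reported 88.9% interrater reliability is shown below. The ratings are invented (eight of nine items matching gives 88.9%), and this is an assumption about how the figure might be computed, not a description of the study's actual procedure.

```python
# Sketch of a percent-agreement interrater reliability calculation, assuming the
# 88.9% figure reflects simple agreement between two raters' item ratings.
# The Likert ratings below (1-5) are invented for illustration.
rater_a = [5, 4, 5, 3, 4, 5, 2, 4, 5]
rater_b = [5, 4, 4, 3, 4, 5, 2, 4, 5]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"interrater agreement: {percent_agreement:.1f}%")  # 8/9 matches -> 88.9%
```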
955. Hierarchically Normalized Models of Visual Distortion Sensitivity: Physiology, Perception, and Application. Berardino, Alexander, 01 August 2018.
How does the visual system determine when changes to an image are unnatural (image distortions), how does it weight different types of distortions, and where are these computations carried out in the brain? These questions have plagued neuroscientists, psychologists, and engineers alike for several decades. Different academic communities have approached the problem from different directions, with varying degrees of success. The one thing all groups agree on is that there is value in knowing the answer. Models that appropriately capture human sensitivity to image distortions can be used as a stand-in for human observers in order to optimize any algorithm in which fidelity to human perception matters (e.g., image and video compression).

In this thesis, we approach the problem by building models informed and constrained by both visual physiology and the statistics of natural images, and we train them to match human psychophysical judgments about image distortions. We then develop a novel synthesis method that forces the models to make testable predictions, and we quantify the quality of those predictions with human psychophysics. Because our approach links physiology and perception, it allows us to pinpoint which elements of physiology are necessary to capture human sensitivity to image distortions. We consider several different models of the visual system, some developed from known neural physiology and some inspired by recent breakthroughs in artificial intelligence (deep neural networks trained to recognize objects within images at human performance levels). We show that models inspired by early brain areas (retina and LGN) consistently capture human sensitivity to image distortions better than both the state of the art and competing models of the visual system. We argue that divisive normalization, a ubiquitous computation in the visual system, is integral to correctly capturing human sensitivity.

After establishing that our models of the retina and LGN outperform all other tested models, we develop a novel framework for optimally rendering images on any display for human observers. We show that a model of this kind can serve as a stand-in for human observers within this optimization framework and produces images that are better than those of other state-of-the-art algorithms. We also show that the other tested models fail as stand-ins for human observers within this framework.

Finally, we propose and test a normative framework for thinking about human sensitivity to image distortions. In this framework, we hypothesize that the human visual system decomposes images into structural changes (those that change the identity of objects and scenes) and non-structural changes (those that preserve object and scene identity), and weights these changes differently. We test human sensitivity to distortions that fall into each of these categories and use these data to identify potential weaknesses of our model that can be improved in further work.
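Divisive normalization, the computation argued here to be integral, divides each unit's driven response by a pooled sum of neighboring responses plus a semi-saturation constant. A generic numpy sketch follows; the exponent, sigma, and pooling weights are arbitrary illustrative values, not the parameters fitted in the thesis.

```python
# Generic divisive normalization: each unit's driven response is divided by a
# pooled (weighted) sum of its neighbors' responses plus a stabilizing constant.
# Parameter values are arbitrary, not those fitted in the thesis.
import numpy as np

def divisive_normalization(responses, sigma=0.1, n=2.0, weights=None):
    r = np.abs(responses) ** n
    if weights is None:
        weights = np.ones((len(r), len(r))) / len(r)  # uniform pooling over all units
    pool = weights @ r
    return r / (sigma ** n + pool)

filter_outputs = np.array([0.2, 1.5, 0.3, 0.1])  # e.g., linear filter responses to an image patch
print(divisive_normalization(filter_outputs))
```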
956. Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence. Lieder, Falk, 11 September 2018.
Bad decisions can have devastating consequences, and there is a vast body of literature suggesting that human judgment and decision-making are riddled with numerous systematic violations of the rules of logic, probability theory, and expected utility theory. The discovery of these cognitive biases in the 1970s challenged the concept of Homo sapiens as the rational animal and has profoundly shaken the foundations of economics and of rational models in the cognitive, neural, and social sciences. Four decades later, these disciplines still lack a rigorous theoretical foundation that can account for people's cognitive biases. Furthermore, designing effective interventions to remedy cognitive biases and improve human judgment and decision-making is still an art rather than a science. I address these two fundamental problems in the first and second parts of my thesis, respectively.

To develop a theoretical framework that can account for cognitive biases, I start from the assumption that human cognition is fundamentally constrained by limited time and the human brain's finite computational resources. Based on this assumption, I redefine human rationality as reasoning and deciding according to cognitive strategies that make the best possible use of the mind's limited resources. I draw on the bounded optimality framework developed in the artificial intelligence literature to translate this definition into a mathematically precise theory of bounded rationality called resource-rationality and a new paradigm for cognitive modeling called resource-rational analysis. Applying this methodology allowed me to derive resource-rational models of judgment and decision-making that accurately capture a wide range of cognitive biases, including the anchoring bias and the numerous availability biases in memory recall, judgment, and decision-making. By showing that these phenomena and the heuristics that generate them are consistent with the rational use of limited resources, my analysis provides a rational reinterpretation of cognitive biases that were once interpreted as hallmarks of human irrationality. This suggests that it is time to revisit the debate about human rationality with the more realistic normative standard of resource-rationality. To enable a systematic assessment of the extent to which human cognition is resource-rational, I present an automatic method for deriving resource-rational heuristics from a mathematical specification of their function and the mind's computational constraints. Applying this method to multi-alternative risky choice led to the discovery of a previously unknown heuristic that people appear to use very frequently. Evaluating human decision-making against resource-rational heuristics suggested that, on average, human decision-making is at most 88% as resource-rational as it could be.

Since people are equipped with multiple heuristics, a complete normative theory of bounded rationality also has to answer the question of when each of these heuristics should be used. I address this question with a rational theory of strategy selection. According to this theory, people gradually learn to select the heuristic with the best possible speed-accuracy trade-off by building a predictive model of its performance, as sketched below. Experiments testing this model confirmed that people gradually learn to make increasingly rational use of their finite time and bounded cognitive resources through a metacognitive reinforcement learning mechanism.

Overall, these findings suggest that, contrary to the bleak picture painted by previous research on heuristics and biases, human cognition is not fundamentally irrational and can be understood as making rational use of bounded cognitive resources. By reconciling rationality with cognitive biases and bounded resources, this line of research addresses fundamental problems of previous rational modeling frameworks, such as expected utility theory, logic, and probability theory. Resource-rationality might thus come to replace classical notions of rationality as a theoretical foundation for modeling human judgment and decision-making in economics, psychology, neuroscience, and other cognitive and social sciences.

In the second part of my dissertation, I apply the principle of resource-rationality to develop tools and interventions for improving the human mind. Early interventions educated people about cognitive biases and taught them the normative principles of logic, probability theory, and expected utility theory. The practical benefits of such interventions are limited because the computational demands of applying them to the complex problems people face in everyday life far exceed individuals' cognitive capacities. Instead, the principle of resource-rationality suggests that people should rely on simple, computationally efficient heuristics that are well adapted to the structure of their environments. Building on this idea, I leverage the automatic strategy discovery method and the insights into metacognitive learning from the first part of my dissertation to develop intelligent systems that teach people resource-rational cognitive strategies. I illustrate this approach by developing and evaluating a cognitive tutor that trains people to plan resource-rationally. My results show that practicing with the cognitive tutor improves people's planning strategies significantly more than practicing without feedback does. (Abstract shortened by ProQuest.)
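The strategy-selection account can be made concrete as a small learner that tracks each heuristic's expected reward and execution time and picks the heuristic with the best predicted reward-minus-time-cost. The toy sketch below illustrates that idea only; it is not the dissertation's model or code, and the strategy names and parameter values are arbitrary.

```python
# Toy sketch of strategy selection by a learned speed-accuracy trade-off: the agent
# tracks each heuristic's average reward and execution time and chooses the one
# with the highest estimated reward minus the opportunity cost of its time.
# Illustrative only; not the dissertation's model or code.
import random

class StrategySelector:
    def __init__(self, strategies, cost_per_second=0.5, lr=0.1):
        self.strategies = strategies               # names of available heuristics
        self.cost_per_second = cost_per_second     # opportunity cost of time
        self.lr = lr
        self.reward_est = {s: 0.0 for s in strategies}
        self.time_est = {s: 1.0 for s in strategies}

    def choose(self, epsilon=0.1):
        if random.random() < epsilon:              # occasional exploration
            return random.choice(self.strategies)
        return max(self.strategies,
                   key=lambda s: self.reward_est[s] - self.cost_per_second * self.time_est[s])

    def update(self, strategy, reward, seconds):
        # Incremental (delta-rule) updates of the predictive performance model
        self.reward_est[strategy] += self.lr * (reward - self.reward_est[strategy])
        self.time_est[strategy] += self.lr * (seconds - self.time_est[strategy])

selector = StrategySelector(["take-the-best", "weighted-additive", "random-choice"])
s = selector.choose()
selector.update(s, reward=1.0, seconds=2.3)        # feedback from one simulated decision
```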
957. Synchronizing Rhythms: Neural Oscillations Align to Rhythmic Patterns in Sound. Doelling, Keith Bryant, 17 November 2018.
Speech perception requires that the listener identify where the meaningful units (e.g., syllables) are before identifying what those units might be. This segmentation is difficult because there are no clear, systematic silences between words, syllables, or phonemes. One potentially useful cue is the acoustic envelope: slow (< 10 Hz) fluctuations in sound amplitude over time. Sharp increases in the envelope are loosely related to the onsets of syllables. In addition to this cue, the brain may also make use of the temporal regularity of syllables, which last ~200 ms on average across languages. This quasi-rhythmicity enables prediction as a means of identifying syllable onsets. The work presented here supports neural synchrony to the envelope at the syllabic rate as a critical mechanism for segmenting the sound stream. Chapters 1 and 2 show synchrony to both speech and music and demonstrate a relationship between synchrony and successful behavior. Chapter 3, following up on this work, compares the data from Chapter 2 with two competing computational models (oscillator vs. evoked) and shows that the data are consistent with an oscillatory mechanism. These chapters support the oscillator as an effective means of read-in and segmentation of rhythmic input.
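The oscillator account can be caricatured as a phase oscillator whose intrinsic rate sits near the ~5 Hz syllabic rate and whose phase is nudged toward the peaks of the acoustic envelope. The numpy sketch below is such a caricature with arbitrary parameters; it is not the oscillator or evoked model actually compared in Chapter 3.

```python
# Caricature of an entraining neural oscillator: a phase oscillator with an
# intrinsic frequency near the syllabic rate (~5 Hz) whose phase is nudged by a
# quasi-rhythmic amplitude envelope. Illustrative parameters only.
import numpy as np

fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)     # 5 seconds of simulated stimulus
f0 = 5.0                        # intrinsic oscillator frequency (Hz)
coupling = 8.0                  # strength of entrainment to the envelope

# Quasi-rhythmic "speech-like" envelope: ~5 Hz amplitude fluctuations plus jitter
envelope = np.clip(np.sin(2 * np.pi * 4.7 * t) + 0.3 * np.random.randn(len(t)), 0, None)

phase = np.zeros(len(t))
for i in range(1, len(t)):
    # Phase advances at the intrinsic rate, plus an envelope-driven correction
    # that pulls the oscillator toward peaking when the envelope is strong.
    dphi = 2 * np.pi * f0 + coupling * envelope[i] * np.sin(-phase[i - 1])
    phase[i] = phase[i - 1] + dphi / fs

oscillation = np.cos(phase)     # entrained oscillation; peaks align with envelope bursts
```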
958. Learning Neural Representations that Support Efficient Reinforcement Learning. Stachenfeld, Kimberly, 21 June 2018.
Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility and rapidly generalize learned information to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation feature learning," in which the animal or agent learns to form representations containing information potentially relevant to the downstream RL process. We will overview different representational objectives that have received attention in neuroscience and in machine learning. The focus of this overview will be first to highlight conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We will use this to motivate a breakdown of the properties of learned representations that are meaningfully different and can be used to inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
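The predictive representation referred to here is commonly formalized as the successor representation (SR): the matrix of discounted expected future state occupancies, M = (I - gamma*T)^-1, whose low-frequency eigenvectors provide a compact, grid-like basis. The numpy sketch below uses a ring of states under a random-walk policy as an illustrative toy environment, not an analysis from the thesis.

```python
# Sketch of a successor representation (SR) on a ring of states, one common
# formalization of a "predictive representation": M[s, s'] is the discounted
# expected number of future visits to s' starting from s under a random walk.
# Low-frequency eigenvectors of M give a periodic, grid-cell-like basis.
import numpy as np

n_states = 20
gamma = 0.95

# Random-walk transition matrix on a ring: step left or right with equal probability
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

# Successor representation: M = (I - gamma * T)^-1
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# Leading eigenvectors of (symmetrized) M form a low-dimensional periodic basis
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
basis = eigvecs[:, ::-1][:, :4]          # top 4 components
print(M.shape, basis.shape)
```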
959. Brain Workout: How Right and Left Brain Integration Activities Affect Maladaptive Behaviors. Dellorto, Victoria, 23 June 2018.
The purpose of this quantitative, single-subject research was to examine the effects of hemisphere-integration activities on maladaptive behaviors as measured by the BASC-III assessment. Morgan and Sideridis (2013) report that problem behavior rates in United States schools range from 10% to 30% and that 92% of teacher respondents indicated that problem behaviors have worsened over their careers. Research has been done on the importance of neuroscience in the field of education, but there is a gap between the research and its application. Baseline data on the targeted behaviors were collected by administering the BASC-III Teacher Rating Scale (TRS) for the participant to two special education teachers and a general education teacher, as well as by having the participant independently complete the Self-Report of Personality (SRP). The student then engaged in two daily integrated-hemisphere activities in the form of a Tell Me Activity. Data were collected on the frequency of errors and the duration of the activity. The intervention was administered for 30 trials, after which all participants completed the BASC-III assessment again. Pre- and post-intervention BASC-III T scores were compared to determine student growth. The participant showed growth in 20 of 45 BASC-III categories across the three TRS reports (15 categories per report) and in 8 of 15 BASC-III categories on the SRP. Although the participant showed growth, growth in functional levels was minimal. Overall, this research remains inconclusive due to the researcher's inability to determine a functional relation between the intervention and maladaptive behaviors.
960. Effort Discounted Decision-Making in Proactive Inhibitory Control. January 2018.
Properly deciding whether to engage in or withhold an action is a critical ability for goal-oriented movement control. Such a decision may be driven by the expected value of the chosen action, but associated physical effort may discount that value. A novel anticipatory stopping task was developed to investigate the effort-discounted decision process potentially at work in proactive inhibitory control. Subjects performed or withheld a reach to a target depending on whether they believed the trial was a Go or a Stop trial, respectively. In Go trials, reward was given for a reach correctly timed to hit the target at the same moment as a moving bar. In Stop trials, reward was given for correctly judging not to reach, based on the color of the moving bar, which signaled the bar's probability of stopping before the target. A resistive force field imposed additional physical effort on the choice to reach. As expected, introducing effort decreased the tendency to respond in trials with higher stop probability. Surprisingly, the tendency to respond increased, and the corresponding reaction time decreased, in trials with lower stop probability. This asymmetric effect suggests that the value of a context-ineffective response is discounted while the value of a context-effective response is flexibly enhanced by its associated effort cost, driving the decision process in a goal-oriented manner. A medial frontal event-related potential (ERP) locked to the onset of the moving bar's appearance reflected this effort-discounted decision process. Theta-band activity observed in Stop trials was consistent with the evaluation of effort and context, possibly reinforcing such decision-making. (Masters Thesis, Biomedical Engineering, 2018)
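The effort-discounting idea in this abstract can be written as a simple subjective-value rule: the value of reaching is the probability that the trial will not stop times the reward, minus an effort cost, with the probability of responding given by a softmax against the value of withholding. The sketch below uses invented parameters, not the study's fitted values. Notably, this simple rule predicts a uniform drop in responding under effort, so the asymmetric effect reported above is exactly what such a model fails to capture without the flexible enhancement the author describes.

```python
# Illustrative effort-discounted value model for the Go/Stop reach decision:
# value of reaching = P(trial does not stop) * reward - effort cost, compared
# against the value of withholding via a softmax (logistic) choice rule.
# All parameters and probabilities are illustrative, not fitted study values.
import math

def p_respond(stop_prob, reward=1.0, effort_cost=0.3, withhold_value=0.0, beta=5.0):
    value_reach = (1.0 - stop_prob) * reward - effort_cost
    return 1.0 / (1.0 + math.exp(-beta * (value_reach - withhold_value)))

for stop_prob in (0.2, 0.5, 0.8):
    print(f"stop prob {stop_prob:.1f}: "
          f"no effort P(reach) = {p_respond(stop_prob, effort_cost=0.0):.2f}, "
          f"with effort P(reach) = {p_respond(stop_prob, effort_cost=0.3):.2f}")
```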