141 |
A role for Dopamine neuron NMDA receptors in learning and decision-making / The role of Dopamine neuron NMDAs in learning and decision-making. Hueske, Emily (Emily Anna-Virginia). January 2011
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2011. / Cataloged from PDF version of thesis. Vita. / Includes bibliographical references. / Midbrain dopamine has demonstrated roles in locomotion, motivation, associative learning, habit formation, action selection and cognition. The many functions of dopamine can be attributed to the multiple projection targets of midbrain dopaminergic nuclei and to the multiple characteristic modes of dopamine neuron firing, tonic and phasic. Phasic transients of midbrain dopamine neurons are widely reported to signal errors conveying discrepancies between predicted and actual reward. Knocking out NMDARs in dopamine neurons has been shown to attenuate dopaminergic phasic firing, providing a potential model for delineating the functions of tonic and phasic dopamine. In order to study the role of dopamine neuron NMDARs in reward-contingent learning, we developed an auditory-cued binary choice task using complex auditory stimuli that lend themselves to efficient learning as well as morphing. We report that mice lacking NMDARs in dopamine neurons have a deficit in learning an auditory two-alternative choice task, in the absence of changes in response vigor. Dopamine neurons respond phasically to rewards as well as reward predictive cues. Updating the reward predictive value of cues is fundamental to shaping adaptive patterns of behavior and decision-making. Given the hypothesized role of dopamine in the updating of reward predictive cues, we were interested to see if an influence of reward history would be evident in the choices made by mice lacking dopamine neuron NMDARs. In an auditory-cued binary choice task, we find an influence of the difficulty of prior successes on subsequent decisions when mice are challenged with varying degrees of discrimination difficulty. In mice lacking dopamine neuron NMDARs, we find a lack of influence of prior decision difficulty. Our results identify a modulation of choices by prior decision difficulty in mice, and demonstrate the dopamine-dependent nature of this modulation. These findings are consistent with a role for dopamine neuron phasic firing in the trial-by-trial shaping of reward-contingent learning. / by Emily Hueske. / Ph.D.
|
142 |
Lateral hypothalamic control of motivated behaviors through the midbrain dopamine system / LH control of motivated behaviors through the midbrain dopamine system. Nieh, Edward H. (Edward Horng-An). January 2016
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 209-231). / The lateral hypothalamus and ventral tegmental area are two brain regions that have long been known to be involved in processing reward and the control of feeding behaviors. We continue work in this area by identifying the functional connectivity between these two regions, providing evidence that LH neurons projecting to the VTA encode conditioned responses, while LH neurons innervated by the VTA encode conditioned and unconditioned stimuli. Activation of the LH-VTA projection can increase compulsive sugar seeking, while inhibition of the projection can suppress this behavior without altering normal feeding due to hunger. We can separate this projection into its GABAergic and glutamatergic components, and we show that the GABAergic component plays a role in promoting feeding and social interaction by increasing motivation for consummatory behaviors, while the glutamatergic component largely plays a role in the suppression of these behaviors. Finally, we show that activation of the GABAergic component causes dopamine release downstream in the nucleus accumbens via disinhibition of VTA dopamine neurons through VTA GABA neurons. Together, these experiments have elucidated the functional roles of the individual circuit components of the greater mesolimbic dopamine system and provided potential targets for therapeutic intervention in overeating disorders and obesity. / by Edward H. Nieh. / Ph. D. in Neuroscience
|
143 |
Radial glia in the developing superior colliculus : evidence for a midline barrier. Wu, Da-Yu. January 1991
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1991. / Includes bibliographical references (leaves 119-132). / by Da-Yu Wu. / Ph.D.
|
144 |
Timing and hippocampal information processing. Hale, Gregory (Gregory John). January 2015
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 88-100). / Timing is a key component in hippocampal encoding of space. I will discuss three lines of work related to this theme. First, I will describe the fine-timescale characteristics of single neurons in hippocampal subregion CA1, where theta oscillations organize groups of neurons into orderly sequences. While theta was once thought to be synchronized throughout CA1, it was recently shown instead to be offset in time along the long axis of the hippocampus. Considering distant pairs of neurons, this fundamental sequence spiking property may instead be systematically staggered by these offsets in the rhythms that pace them. I tested the impact of theta wave time offsets by recording place cell spike sequences from groups of neurons in distant parts of CA1, and found that place cell sequences more closely coordinate with each other than the underlying theta oscillations do. In regions that differ from one another by 13 milliseconds of theta delay, place cell sequences are typically aligned to within 5 milliseconds. This raises the possibility that theta wave offsets serve another purpose, perhaps timing the communication with brain areas connected to different parts of CA1, while compensatory mechanisms are in place to preserve the fine temporal alignment of place cell spatial information. Second, I will describe a tool for closed-loop experiments using information decoded from hippocampal ensembles. Place cell activity is typically extracted and analyzed only after an experiment has ended. But interrogating the timing of hippocampal information, enhancing or interfering with it, requires decoding that information immediately. I will discuss some of the difficulties and the eventual implementation of a system capable of sequence time-scale position decoding and then survey the future experimental applications. / by Gregory Hale. / Ph. D.
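The real-time decoding problem described in this abstract is often approached, at least offline, with a memoryless Bayesian decoder over place-cell tuning curves. The sketch below illustrates only that general idea, assuming Poisson spiking and known tuning curves; it is not the system built in the thesis, and the bin width, firing rates, and function names are hypothetical choices for illustration.

    # Illustrative sketch (not the thesis's actual system): memoryless Bayesian
    # decoding of position from place-cell spike counts in one short window,
    # assuming Poisson spiking and precomputed tuning curves.
    import numpy as np

    def decode_position(spike_counts, tuning_curves, dt, prior=None):
        """Posterior over position bins for one decoding window.

        spike_counts : (n_cells,) spikes observed in the window
        tuning_curves: (n_cells, n_pos) expected firing rate (Hz) per position bin
        dt           : window length in seconds (e.g. 0.02 for sequence timescales)
        """
        n_pos = tuning_curves.shape[1]
        if prior is None:
            prior = np.full(n_pos, 1.0 / n_pos)      # uniform spatial prior
        expected = tuning_curves * dt                # expected counts per position bin
        # Poisson log-likelihood summed over cells (constant terms dropped)
        log_like = spike_counts @ np.log(expected + 1e-12) - expected.sum(axis=0)
        log_post = log_like + np.log(prior)
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    # Toy usage: 3 cells, 5 position bins, 20-ms decoding window
    rates = np.array([[2., 8., 2., 1., 1.],
                      [1., 2., 9., 2., 1.],
                      [1., 1., 2., 8., 2.]])
    posterior = decode_position(np.array([0, 2, 1]), rates, dt=0.02)
    print(posterior.argmax())  # most probable position bin

In a closed-loop setting the same computation would have to run within a few milliseconds of spike arrival, which is the engineering challenge the abstract alludes to.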
|
145 |
Dynamics of dopamine signaling and network activity in the striatum during learning and motivated pursuit of goals. Howe, Mark W. (Mark William). January 2013
Thesis (Ph. D. in Neuroscience)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2013. / Cataloged from PDF version of thesis. "February 2013." / Includes bibliographical references (p. 118-126). / Learning to direct behaviors towards goals is a central function of all vertebrate nervous systems. Initial learning often involves an exploratory phase, in which actions are flexible and highly variable. With repeated successful experience, behaviors may be guided by cues in the environment that reliably predict the desired outcome, and eventually behaviors can be executed as crystallized action sequences, or "habits", which are relatively inflexible. Parallel circuits through the basal ganglia and their inputs from midbrain dopamine neurons are believed to make critical contributions to these phases of learning and behavioral execution. To explore the neural mechanisms underlying goal-directed learning and behavior, I have employed electrophysiological and electrochemical techniques to measure neural activity and dopamine release in networks of the striatum, the principal input nucleus of the basal ganglia, as rats learned to pursue rewards in mazes. The electrophysiological recordings revealed training-dependent dynamics in striatal local field potentials and coordinated neural firing that may differentially support both network rigidity and flexibility during pursuit of goals. Electrochemical measurements of real-time dopamine signaling during maze running revealed prolonged signaling changes that may contribute to motivating or guiding behavior. Pathological over- or under-expression of these network states may contribute to symptoms experienced in a range of basal ganglia disorders, from Parkinson's disease to drug addiction. / by Mark W. Howe. / Ph. D. in Neuroscience
|
146 |
A connectomic analysis of the directional selectivity circuit in the mouse retina. Greene, Matthew (Matthew Jason). January 2016
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 51-56). / This thesis addresses the question of how direction selectivity (DS) arises in the mouse retina. DS has long been observed in retinal ganglion cells, and more recently confirmed in the starburst amacrine cell. Upstream retinal bipolar cells, however, have been shown to lack direction selectivity, indicating that the mechanism that gives rise to DS lies in the inner plexiform layer (IPL), where the axons of bipolar cells costratify with amacrine and ganglion cells. We reconstructed a region of the IPL and identified cell types within it, and have discovered a mechanism which may explain the origin of DS activity in the mammalian retina, one that relies on what we call "space-time wiring specificity." It has been suggested that a DS signal can arise from non-DS excitatory inputs if at least one among spatially segregated inputs transmits its signal with some delay; we extend this idea to also consider a difference in the degree to which each signal is sustained. Previously, it has been supposed that this delay occurs within the starburst amacrine cells' dendrites. We hypothesized an alternative, presynaptic mechanism. We observed that different bipolar cell types, which are believed to express different degrees of sustained activity, contact different regions of the starburst amacrine cell dendrite, giving rise to a space-time wiring specificity that should produce a DS signal. We additionally provide a model that predicts the strength of DS as a function of the spatial segregation of inputs and the temporal delay. / by Matthew Greene. / Ph. D.
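As a rough, hypothetical illustration of the space-time wiring idea summarized above (not the model developed in the thesis), the sketch below sums two spatially offset, non-DS inputs, one temporally sustained and one transient, and shows that the summed peak is larger for motion in one direction. The filter time constants, input spacing, and stimulus speed are made-up values.

    # Toy "space-time wiring" demo: direction selectivity from non-DS inputs.
    import numpy as np

    def lowpass(signal, tau, dt):
        """First-order low-pass filter; larger tau gives a more sustained response."""
        out = np.zeros_like(signal)
        for t in range(1, len(signal)):
            out[t] = out[t - 1] + dt / tau * (signal[t] - out[t - 1])
        return out

    def dendrite_response(direction, dt=0.001, speed=1.0, spacing=0.05):
        """Summed drive for a bar moving in +1 or -1 direction across two inputs."""
        t = np.arange(0, 0.5, dt)
        # Times at which the moving bar crosses each input's receptive field
        cross = [0.1, 0.1 + direction * spacing / speed]
        stim = [np.exp(-((t - c) ** 2) / (2 * 0.01 ** 2)) for c in cross]
        sustained = lowpass(stim[0], tau=0.08, dt=dt)   # slow, sustained input
        transient = lowpass(stim[1], tau=0.01, dt=dt)   # fast, transient input
        return np.max(sustained + transient)            # peak summed drive

    pref, null = dendrite_response(+1), dendrite_response(-1)
    dsi = (pref - null) / (pref + null)                 # direction selectivity index
    # Positive DSI: when motion goes from the sustained input toward the transient
    # one, the lingering sustained response overlaps the transient response.
    print(f"DSI = {dsi:.2f}")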
|
147 |
Spatial perception and movement planning in the posterior parietal cortex. Mazzoni, Pietro. January 1994
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1994. / Includes bibliographical references (leaves 262-284). / by Pietro Mazzoni. / Ph.D.
|
148 |
From space to episodes : modeling memory formation in the hippocampal-neocortical system / Modeling memory formation in the hippocampal-neocortical system. Káli, Szabolcs, 1972-. January 2001
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2001. / Includes bibliographical references (p. 171-196). / This thesis describes the use of mathematical, statistical, and computational methods to analyze, in two paradigmatic areas, what the hippocampus and associated structures do, and how they do it. The first model explores the formation of place fields in the hippocampus. This model is constrained by hippocampal anatomy and physiology and data on the effects of environmental manipulations on the place cell representation. It is based on an attractor network model of area CA3 in which recurrent interactions create place cell representations from location- and direction-specific activity in the entorhinal cortex, all under neuromodulatory influence. In unfamiliar environments, mossy fiber inputs impose activity patterns on CA3, and recurrent collaterals and perforant path inputs are subject to graded Hebbian plasticity. Attractors are thus sculpted in CA3, and are associated with entorhinal activity patterns. In familiar environments, place fields are controlled by the way that perforant path inputs select amongst the attractors. Depending on training experience, the model generates place fields that are either directional or non-directional, and whose changes when the environment undergoes simple geometric transformations are in accordance with experimental data. Representations of multiple environments can be stored and recalled with little interference, and have the appropriate degrees of similarity in visually similar environments. / The second model provides a serious test of the consolidation theory of hippocampal-cortical interactions. The neocortical component of the model is a hierarchical network structure, whose primary goal is to extract statistical structure from its set of inputs through unsupervised learning. This interacts with a hippocampal component, which is capable of fast learning, cue-based recall, and off-line replay of stored patterns. The model demonstrates the feasibility of hippocampally-dependent memory consolidation in a more general and realistic setting than earlier models. It reproduces basic characteristics of retrograde amnesia, together with some related phenomena such as repetition priming. The model clarifies the relationship between memory for general (semantic) and specific (episodic) information, suggesting that part of their underlying substrate may be shared. The model highlights some problematic aspects of consolidation theory, which need to be addressed by further experimental and theoretical studies. / by Szabolcs Káli. / Ph.D.
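For readers unfamiliar with attractor networks, the sketch below is a bare-bones binary autoassociative network with Hebbian recurrent weights, offered only as a cartoon of the attractor idea invoked in this abstract. The thesis model itself includes graded plasticity, entorhinal and mossy fiber inputs, and neuromodulation, none of which appear here; the sizes and random patterns are arbitrary.

    # Minimal Hopfield-style autoassociative network: Hebbian recurrent weights
    # store patterns, and a degraded cue is pulled back to the nearest attractor.
    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_patterns = 200, 3
    patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

    # Hebbian outer-product learning over the recurrent "collaterals"
    W = (patterns.T @ patterns) / n_units
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=10):
        """Iterate the network dynamics from a (possibly degraded) cue."""
        state = cue.copy()
        for _ in range(steps):
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    # Degrade pattern 0 by flipping 20% of its units, then let recall clean it up
    cue = patterns[0].copy()
    flip = rng.choice(n_units, size=n_units // 5, replace=False)
    cue[flip] *= -1
    overlap = (recall(cue) @ patterns[0]) / n_units
    print(f"overlap with stored pattern after recall: {overlap:.2f}")  # near 1.0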
|
149 |
A Bayesian framework for concept learning. Tenenbaum, Joshua B. (Joshua Brett), 1972-. January 1999
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1999. / Includes bibliographical references (p. 297-314). / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Human concept learning presents a version of the classic problem of induction, which is made particularly difficult by the combination of two requirements: the need to learn from a rich (i.e. nested and overlapping) vocabulary of possible concepts and the need to be able to generalize concepts reasonably from only a few positive examples. I begin this thesis by considering a simple number concept game as a concrete illustration of this ability. On this task, human learners can, with reasonable confidence, lock in on one out of a billion billion billion logically possible concepts, after seeing only four positive examples of the concept, and can generalize informatively after seeing just a single example. Neither of the two classic approaches to inductive inference (hypothesis testing in a constrained space of possible rules, and computing similarity to the observed examples) can provide a complete picture of how people generalize concepts in even this simple setting. This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By imposing the constraints of a probabilistic model of the learning situation, the Bayesian learner can draw out much more information about a concept's extension from a given set of observed examples than either rule-based or similarity-based approaches do, and can use this information in a rational way to infer the probability that any new object is also an instance of the concept. There are three components of the Bayesian framework: a prior probability distribution over a hypothesis space of possible concepts; a likelihood function, which scores each hypothesis according to its probability of generating the observed examples; and the principle of hypothesis averaging, under which the learner computes the probability of generalizing a concept to new objects by averaging the predictions of all hypotheses weighted by their posterior probability (proportional to the product of their priors and likelihoods). The likelihood, under the assumption of randomly sampled positive examples, embodies the size principle for scoring hypotheses: smaller consistent hypotheses are more likely than larger hypotheses, and they become exponentially more likely as the number of observed examples increases. The principle of hypothesis averaging allows the Bayesian framework to accommodate both rule-like and similarity-like generalization behavior, depending on how peaked the posterior probability is. Together, the size principle plus hypothesis averaging predict a convergence from similarity-like generalization (due to a broad posterior distribution) after very few examples are observed to rule-like generalization (due to a sharply peaked posterior distribution) after sufficiently many examples have been observed. The main contributions of this thesis are as follows. First and foremost, I show how it is possible for people to learn and generalize concepts from just one or a few positive examples (Chapter 2).
Building on that understanding, I then present a series of case studies of simple concept learning situations where the Bayesian framework yields both qualitative and quantitative insights into the real behavior of human learners (Chapters 3-5). These cases each focus on a different learning domain. Chapter 3 looks at generalization in continuous feature spaces, a typical representation of objects in psychology and machine learning with the virtues of being analytically tractable and empirically accessible, but the downside of being highly abstract and artificial. Chapter 4 moves to the more natural domain of learning words for categories of objects and shows the relevance of the same phenomena and explanatory principles introduced in the more abstract setting of Chapters 1-3 for real-world learning tasks like this one. In each of these domains, both similarity-like and rule-like generalization emerge as special cases of the Bayesian framework in the limits of very few or very many examples, respectively. However, the transition from similarity to rules occurs much faster in the word learning domain than in the continuous feature space domain. I propose a Bayesian explanation of this difference in learning curves that places crucial importance on the density or sparsity of overlapping hypotheses in the learner's hypothesis space. To test this proposal, a third case study (Chapter 5) returns to the domain of number concepts, in which human learners possess a more complex body of prior knowledge that leads to a hypothesis space with both sparse and densely overlapping components. Here, the Bayesian theory predicts, and human learners produce, either rule-based or similarity-based generalization from a few examples, depending on the precise examples observed. I also discuss how several classic reasoning heuristics may be used to approximate the much more elaborate computations of Bayesian inference that this domain requires. In each of these case studies, I confront some of the classic questions of concept learning and induction: Is the acquisition of concepts driven mainly by pre-existing knowledge or the statistical force of our observations? Is generalization based primarily on abstract rules or similarity to exemplars? I argue that in almost all instances, the only reasonable answer to such questions is: Both. More importantly, I show how the Bayesian framework allows us to answer much more penetrating versions of these questions: How does prior knowledge interact with the observed examples to guide generalization? Why does generalization appear rule-based in some cases and similarity-based in others? Finally, Chapter 6 summarizes the major contributions in more detailed form and discusses how this work fits into the larger picture of contemporary research on human learning, thinking, and reasoning. / by Joshua B. Tenenbaum. / Ph.D.
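The three components named in this abstract (a prior, a size-principle likelihood, and hypothesis averaging) can be made concrete with a toy version of the number concept game. The sketch below uses a tiny hand-built hypothesis space and a flat prior, both simplifying assumptions of this illustration; the thesis's actual hypothesis spaces and priors are far richer.

    # Toy Bayesian concept learning over numbers 1-100 with the size principle.
    UNIVERSE = range(1, 101)
    hypotheses = {
        "even":           {x for x in UNIVERSE if x % 2 == 0},
        "powers of 2":    {2, 4, 8, 16, 32, 64},
        "multiples of 4": {x for x in UNIVERSE if x % 4 == 0},
        "10 to 70":       set(range(10, 71)),
    }
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # flat prior (an assumption)

    def posterior(examples):
        """Posterior over hypotheses given positive examples (size principle)."""
        post = {}
        for h, ext in hypotheses.items():
            if all(x in ext for x in examples):
                # Each example assumed sampled uniformly from the hypothesis' extension,
                # so smaller consistent hypotheses get exponentially higher likelihood.
                post[h] = prior[h] * (1.0 / len(ext)) ** len(examples)
            else:
                post[h] = 0.0
        z = sum(post.values())
        return {h: p / z for h, p in post.items()}

    def p_generalize(y, examples):
        """Probability that a new number y is in the concept (hypothesis averaging)."""
        post = posterior(examples)
        return sum(p for h, p in post.items() if y in hypotheses[h])

    # With one example (16) generalization is graded and similarity-like; with
    # {16, 8, 2, 64} the posterior peaks sharply on "powers of 2" and becomes rule-like.
    print(round(p_generalize(20, [16]), 2))             # ~0.31: 20 is still plausible
    print(round(p_generalize(20, [16, 8, 2, 64]), 2))   # ~0.00: 20 is now excluded

The shift from the first print to the second is the similarity-to-rules transition the abstract describes, driven entirely by the size principle sharpening the posterior.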
|
150 |
Towards a unified account of face (and maybe object) processing. Tan, Cheston Y.-C. (Cheston Yin-Chet). January 2012
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2012. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 191-197). / Faces are an important class of visual stimuli, and are thought to be processed differently from objects by the human visual system. Going beyond the false dichotomy of same versus different processing, it is more important to understand how exactly faces are processed similarly or differently from objects. However, even by itself, face processing is poorly understood. Various aspects of face processing, such as holistic, configural, and face-space processing, are investigated in relative isolation, and the relationships between these are unclear. Furthermore, face processing is characteristically affected by various stimulus transformations such as inversion, contrast reversal and spatial frequency filtering, but how or why is unclear. Most importantly, we do not understand even the basic mechanisms of face processing. We hypothesize that what makes face processing distinctive is the existence of large, coarse face templates. We test our hypothesis by modifying an existing model of object processing to utilize such templates, and find that our model can account for many face-related phenomena. Using small, fine face templates as a control, we find that our model displays object-like processing characteristics instead. Overall, we believe that we may have made the first steps towards achieving a unified account of face processing. In addition, results from our control suggest that face and object processing share fundamental computational mechanisms. Coupled with recent advances in brain recording techniques, our results mean that face recognition could form the "tip of the spear" for attacking and solving the problem of visual recognition. / by Cheston Y.-C. Tan. / Ph.D.
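One way to see why large, coarse templates behave "holistically" while small, fine templates do not is a toy template-matching comparison. The sketch below is only an illustration of that intuition, not the model in the thesis; the random vectors stand in for face halves and the Gaussian tuning widths are arbitrary assumptions.

    # Toy contrast between a whole-face template and a part template.
    import numpy as np

    rng = np.random.default_rng(1)
    top_a, bot_a = rng.normal(size=50), rng.normal(size=50)
    top_b, bot_b = rng.normal(size=50), rng.normal(size=50)

    def rbf(x, template, sigma):
        """Gaussian (RBF) tuning of a template unit to input x."""
        return np.exp(-np.sum((x - template) ** 2) / (2 * sigma ** 2))

    face_a = np.concatenate([top_a, bot_a])
    composite = np.concatenate([top_a, bot_b])   # top half of face A, bottom of face B

    # Large, coarse template spanning the whole face vs a small template for the top half
    whole_face_unit = rbf(composite, face_a, sigma=6.0)
    top_half_unit = rbf(composite[:50], face_a[:50], sigma=4.0)
    print(round(top_half_unit, 2), round(whole_face_unit, 2))
    # The part unit still matches perfectly, but the whole-face unit's response is
    # degraded by the swapped bottom half: its output depends on all parts jointly,
    # which is the kind of holistic behavior large templates can produce.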
|