We successfully navigate the world by making decisions based on what we have learned. In the brain, two prominent learning systems have been identified, and each is likely to guide decisions in different ways. Research on decision making has primarily focused on a reward learning system in the striatum. These studies have illuminated how repeated choices and rewards build representations that guide choices and actions when the same situation is encountered again. However, in a constantly changing environment, choices may not repeat themselves. Further, the environment may have more structure than simple reward learning can navigate. In these situations, decisions may be guided by a different learning system: a flexible learning system in the hippocampus that encodes episodes or, more broadly, relations between stimuli. However, investigations into the roles of the reward learning system and the relational learning system in decision making have developed largely independently of each other. In the studies described below, I explore the function of these learning systems in value-guided decision making. Complementarily, I also explore how ongoing reward learning may modulate memory formation in the hippocampal system. In these studies, I demonstrate that reward learning and decision making are influenced by relational learning, and that these effects are predicted by hippocampal-striatal connectivity during learning. Separately, I establish that episodic memory is, in turn, influenced by ongoing reward learning; successful memory is predicted by modulations of reward and memory regions, including the striatum and hippocampus. Overall, these results provide novel insights into the learning systems that encode memories for future adaptive behavior.
Identifier | oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/D8MW2Q8M |
Date | January 2012 |
Creators | Wimmer, George Elliott |
Source Sets | Columbia University |
Language | English |
Detected Language | English |
Type | Theses |