  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Some Theoretical Contributions To The Mutual Exclusion Problem

Alagarsamy, K 04 1900 (has links) (PDF)
No description available.
32

Exploration du lien entre la qualité de la mentalisation et l'efficacité du rappel autobiographique [Exploring the link between the quality of mentalization and the effectiveness of autobiographical recall]

Dauphin, Julie January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
33

The use of memory state knowledge to improve computer memory system organization

Isen, Ciji 01 June 2011 (has links)
The trends in virtualization as well as multi-core, multiprocessor environments have translated to a massive increase in the amount of main memory each individual system needs to be fitted with, so as to effectively utilize this growing compute capacity. The increasing demand on main memory implies that the main memory devices and their issues are as important a part of system design as the central processors. The primary issues of modern memory are power, energy, and scaling of capacity. Nearly a third of the system power and energy can come from the memory subsystem. At the same time, modern main memory devices are limited by technology in their future ability to scale and keep pace with modern program demands, thereby requiring exploration of alternatives to main memory storage technology. This dissertation exploits dynamic knowledge of memory state and memory data value to improve memory performance and reduce memory energy consumption. A cross-boundary approach is proposed in this research to communicate information about dynamic memory management state (allocated and deallocated memory) between software and the hardware memory subsystem through a combination of ISA support and hardware structures. These mechanisms help identify memory operations to regions of memory that have no impact on the correct execution of the program because they were either freshly allocated or deallocated. This inference stems from the fact that data in deallocated memory regions is no longer useful to the program, and data in freshly allocated memory is not yet useful because it has not been defined by the program. By being cognizant of this state, such memory operations can be avoided, saving energy and improving the usefulness of the main memory. Furthermore, when stores write zeros to memory, this research reduces the number of stores to memory by capturing the zeros as compressed information stored along with the memory management state. Using the methods outlined above, this dissertation harnesses memory management state and data value information to achieve significant savings in energy consumption while extending the endurance limit of memory technologies. / text
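The following is a minimal software sketch of the general idea described in this abstract: tracking dynamic allocation state so that writes to dead or undefined regions can be dropped, and recording all-zero stores as compressed state. It is purely illustrative; the dissertation proposes ISA support and hardware structures, which this toy model does not attempt to reproduce, and all names here (line_allocated, line_is_zero, and so on) are hypothetical.

/* Toy software model of allocation-state-aware stores.
 * Hypothetical names; the dissertation's actual mechanism uses ISA
 * support and hardware structures, which this sketch does not model. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 64          /* memory modeled as 64 line-sized regions */
#define LINE_SIZE 64

static uint8_t memory[NUM_LINES][LINE_SIZE];
static bool line_allocated[NUM_LINES]; /* dynamic memory-management state */
static bool line_is_zero[NUM_LINES];   /* compressed "all zeros" flag */

static void on_alloc(int line)   { line_allocated[line] = true;  line_is_zero[line] = true; }
static void on_dealloc(int line) { line_allocated[line] = false; }

/* A store is forwarded to memory only if the target region is live;
 * stores of zero to an all-zero region are captured by the flag instead. */
static void store(int line, int offset, uint8_t value) {
    if (!line_allocated[line])
        return;                        /* dead data: drop the write */
    if (value == 0 && line_is_zero[line])
        return;                        /* already known to be zero: nothing to write */
    line_is_zero[line] = false;
    memory[line][offset] = value;
}

int main(void) {
    on_alloc(3);
    store(3, 0, 42);       /* real write */
    on_dealloc(3);
    store(3, 1, 7);        /* dropped: region was deallocated */
    printf("%d %d\n", memory[3][0], memory[3][1]);  /* prints: 42 0 */
    return 0;
}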
35

[en] MEMORY IN SCHOOL-AGED CHILDREN: A NEUROPSYCHOLOGICAL PERSPECTIVE / [pt] MEMÓRIA DE CRIANÇAS EM IDADE ESCOLAR: UMA PERSPECTIVA NEUROPSICOLÓGICA

LUCIANA BROOKING TERESA DIAS 09 August 2018 (has links)
[pt] A memória se apresenta em sistemas distintos e interligados. Ela permite a constituição do sujeito e sua interação com o meio em que vive. Durante o desenvolvimento, mudanças biológicas e comportamentais vão ocorrendo, algumas vezes de forma rápida e outras, lentamente, respeitando a maturação neuronal, a interação social e a cultura em que vive. Nesse contexto, a emoção tem um papel modulador das funções cognitivas, fortalecendo ou enfraquecendo o armazenamento de uma informação, ou seja, influenciando a memória. Seu armazenamento pode ser sensorial, de curto e de longo prazo e ela pode se dividir em estágios (codificação, armazenamento e recuperação) e em tipos (explícita ou declarativa e implícita ou não declarativa). A memória explícita se subdivide em episódica e semântica, a implícita inclui os hábitos e habilidades, e a memória de curto prazo inclui a memória de trabalho. As áreas cerebrais envolvidas são o hipocampo, lobos frontal e temporal e a amígdala. Há distinção dos sistemas de memória durante o desenvolvimento: bebês reproduzem ações, reconhecem faces e eventos familiares e apresentam memória implícita (que não se altera muito ao longo do desenvolvimento); crianças pré-escolares apresentam uma memória mais sofisticada, organizando melhor as informações; e na fase escolar a memória já se encontra mais desenvolvida. O estudo mostrou que a memória semântica melhora gradualmente com a idade, acompanhando a ampliação de vocabulário; a memória episódica se desenvolve de forma mais pontual; e a memória de trabalho apresenta maturação mais tardia, acompanhando o desenvolvimento das funções executivas. / [en] Memory is organized into distinct, interconnected systems. It allows the constitution of the subject and their interaction with the environment in which they live. During development, biological and behavioral changes occur, sometimes quickly and sometimes slowly, following neuronal maturation, social interaction, and the surrounding culture. Some abilities are innate, while others are acquired through learning. In this context, emotion acts as a modulator of cognitive functions, strengthening or weakening the storage of information, that is, influencing memory. Storage can be sensory, short-term, or long-term, and memory can be divided into stages (encoding, storage, and retrieval) and into types (explicit or declarative and implicit or non-declarative). Explicit memory is subdivided into episodic and semantic memory, implicit memory includes habits and skills, and short-term memory includes working memory. The brain areas involved are the hippocampus and the frontal and temporal lobes, which take part in the formation of new memories and in recognition and consolidation during learning, and the amygdala, which allows storage of episodes that carry more emotion. Memory systems can be distinguished during development: babies reproduce actions, recognize faces and familiar events, and show implicit memory (which does not change much throughout development); preschool children have a more sophisticated memory, organizing information better; and at school age memory is already more developed. The study showed that semantic memory improves gradually with age, following the expansion of vocabulary; episodic memory develops in a more punctuated fashion; and working memory matures later, following the development of executive functions.
36

Sistemas de memórias de tradução e tecnologias de tradução automática: possíveis efeitos na produção de tradutores em formação / Translation memory systems and machine translation: possible effects on the production of translation trainees

Talhaferro, Lara Cristina Santos 26 February 2018 (has links)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / O processo da globalização, que tem promovido crescente circulação de informações multilíngues em escala mundial, tem proporcionado notáveis mudanças no mercado da tradução. No contexto globalizado, para manterem-se competitivos e atenderem à demanda de trabalho, a qual conta com frequentes atualizações de conteúdo e prazos reduzidos, os tradutores passaram a adotar ferramentas de tradução assistidas por computador em sua rotina de trabalho. Duas dessas ferramentas, utilizadas principalmente por tradutores das áreas técnica, científica e comercial, são os sistemas de memórias de tradução e as tecnologias de tradução automática. O emprego de tais recursos pode ter influências imprevisíveis nas traduções, sobre as quais os tradutores raramente têm oportunidade de ponderar. Se os profissionais são iniciantes ou se lhes falta experiência em determinada ferramenta, essa influência pode ser ainda maior. Considerando que os profissionais novatos tendem a utilizar cada vez mais as ferramentas disponíveis para aumentar sua eficiência, neste trabalho são investigados os possíveis efeitos do uso de sistemas de memórias de tradução e tecnologias de tradução automática, especificamente o sistema Wordfast Anywhere e um de seus tradutores automáticos, o Google Cloud Translate API, nas escolhas de graduandos em Tradução. Foi analisada a aplicação dessas ferramentas na tradução (inglês/português) de quatro abstracts designados a dez alunos do quarto ano do curso de Bacharelado em Letras com Habilitação de Tradutor da Unesp de São José do Rio Preto, divididos em três grupos: os que fizeram o uso do Wordfast Anywhere, os que utilizaram essa ferramenta para realizar a pós-edição da tradução feita pelo Google Cloud Translate API e os que não utilizaram nenhuma dessas ferramentas para traduzir os textos. Tal exame consistiu de uma análise numérica entre as traduções, com a ajuda do software Turnitin e uma análise contrastiva da produção dos alunos, em que foram considerados critérios como tempo de realização da tradução, emprego da terminologia específica, coesão e coerência textual, utilização da norma culta da língua portuguesa e adequação das traduções ao seu fim. As traduções também passaram pelo exame de profissionais das áreas sobre as quais tratam os abstracts, para avaliá-las do ponto de vista de um usuário do material traduzido. Além de realizarem as traduções, os alunos responderam a um questionário, em que esclarecem seus hábitos e suas percepções sobre as ferramentas computacionais de tradução. A análise desses trabalhos indica que a automação não influenciou significativamente na produção das traduções, confirmando nossa hipótese de que o tradutor tem papel central nas escolhas terminológicas e na adequação do texto traduzido a seu fim.
/ Globalization has promoted a growing flow of multilingual information worldwide, causing significant changes in the translation market. In this scenario, translators have been employing computer-assisted translation tools (CAT tools) to meet the demand for information translated into different languages within increasingly tight turnarounds. Translation memory systems and machine translation are two of these tools, used especially when translating technical, scientific and commercial texts. The use of such resources may have unpredictable influences on the production of translated texts. Nonetheless, translators seldom have the opportunity to ponder how their production may be affected by these tools, especially if they are new to the profession or lack experience with the tools used. Seeking to examine how the work of translators in training may be influenced by the translation memory systems and machine translation technologies they employ, this work investigates how a translation memory system, Wordfast Anywhere, and one of its machine translation tools, Google Cloud Translate API, may affect the choices of translation trainees. To achieve this goal, we present an analysis of English-to-Portuguese translations of four abstracts assigned to ten students of the undergraduate Program in Languages with Major in Translation at São Paulo State University, divided into three groups: one aided by Wordfast Anywhere, one post-editing output from Google Cloud Translate API within Wordfast Anywhere, and one unassisted by either of these tools. The study consists of a numerical analysis, assisted by Turnitin, and a comparative analysis that examines the following aspects: time spent on the translation, use of specific terminology, cohesion and coherence, use of standard Portuguese, and suitability of the translations for their purpose. In addition, a group of four experts in the fields covered by the abstracts was consulted on the translations, evaluating them from the point of view of users of the translated material. Finally, the students completed a questionnaire on their habits and perceptions regarding CAT tools. The examination of their work suggests that automation did not significantly influence the production of the translations, confirming our hypothesis that human translators are at the core of decision-making when it comes to terminological choices and the suitability of translated texts for their purpose. / 2016/07907-0
37

Fault tolerance for stream programs on parallel platforms

Sanz-Marco, Vicent January 2015 (has links)
A distributed system is a collection of autonomous computers connected by a network, together with distributed software that lets users see the system as a single entity capable of providing computing facilities. Distributed systems with centralised control have a distinguished control node, called the leader node, whose main role is to distribute and manage shared resources in a resource-efficient manner. Such systems can use stream processing networks for communication: applications typically act as continuous queries, ingesting data continuously, analysing and correlating it, and generating a stream of results. Fault tolerance is the ability of a system to keep processing information even when a failure or anomaly occurs. It has become an important requirement for distributed systems, because the likelihood of failure rises with the number of nodes and the runtime of applications. Fault tolerance mechanisms therefore need to be added in order to provide the internal capacity to preserve the execution of tasks despite the occurrence of faults. If the leader of a centralised control system fails, a new leader must be elected. While leader election has received a lot of attention in message-passing systems, very few solutions have been proposed for shared memory systems, which is the setting addressed here. In addition, rollback-recovery strategies are important fault tolerance mechanisms for distributed systems: information is stored in stable storage during failure-free operation, and when a failure affects a node, the stored information is used to recover the node's state to a point before the failure. This thesis focuses on creating two fault tolerance mechanisms for distributed systems with centralised control that use stream processing for communication: leader election and log-based rollback-recovery, both implemented using LPEL. The proposed leader election method is based on an atomic Compare-And-Swap (CAS) instruction, which is directly available on many processors. It works with idle nodes, meaning that only non-busy nodes compete to become the new leader while busy nodes continue with their tasks and later update their leader reference; it also has a short completion time and low space complexity. The log-based rollback-recovery method proposed for distributed systems with stream processing networks is a novel approach that is free from the domino effect and does not generate orphan messages, satisfying the always-no-orphans consistency condition. Additionally, it imposes lower overhead on the system than other approaches and scales well, because it is insensitive to the number of nodes in the system.
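As a rough illustration only (not LPEL's actual implementation, and with all identifiers invented for the example), the core of a CAS-based leader election among idle workers can be sketched in C11 as follows: a shared leader slot starts empty, each idle worker attempts a single compare-and-swap on it, exactly one succeeds, and the others simply read the updated reference afterwards.

/* Minimal sketch of CAS-based leader election among idle workers.
 * Illustrative only; names are hypothetical and this is not LPEL code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NO_LEADER (-1)
static atomic_int leader = NO_LEADER;   /* shared leader reference */

/* An idle worker makes one CAS attempt; exactly one caller succeeds. */
static int try_become_leader(int my_id) {
    int expected = NO_LEADER;
    return atomic_compare_exchange_strong(&leader, &expected, my_id);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    if (try_become_leader(id))
        printf("worker %d: became leader\n", id);
    else
        printf("worker %d: sees leader %d\n", id, atomic_load(&leader));
    return NULL;
}

int main(void) {
    pthread_t t[4];
    int ids[4] = {0, 1, 2, 3};
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;
}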
38

Seismic modeling and imaging with Fourier method : numerical analyses and parallel implementation strategies

Chu, Chunlei, 1977- 13 June 2011 (has links)
Our knowledge of elastic wave propagation in general heterogeneous media with complex geological structures comes principally from numerical simulations. In this dissertation, I demonstrate through rigorous theoretical analyses and comprehensive numerical experiments that the Fourier method is a suitable method of choice for large scale 3D seismic modeling and imaging problems, due to its high accuracy and computational efficiency. The most attractive feature of the Fourier method is its ability to produce highly accurate solutions on relatively coarser grids, compared with other numerical methods for solving wave equations. To further advance the Fourier method, I identify two aspects of the method to focus on in this work, i.e., its implementation on modern clusters of computers and efficient high-order time stepping schemes. I propose two new parallel algorithms to improve the efficiency of the Fourier method on distributed memory systems using MPI. The first algorithm employs non-blocking all-to-all communications to optimize the conventional parallel Fourier modeling workflows by overlapping communication with computation. With a carefully designed communication-computation overlapping mechanism, a large amount of communication overhead can be concealed when implementing different kinds of wave equations. The second algorithm combines the advantages of both the Fourier method and the finite difference method by using convolutional high-order finite difference operators to evaluate the spatial derivatives in the decomposed direction. The high-order convolutional finite difference method guarantees a satisfactory accuracy and provides the flexibility of using non-blocking point-to-point communications for efficient interprocessor data exchange and the possibility of overlapping communication and computation. As a result, this hybrid method achieves an optimized balance between numerical accuracy and computational efficiency. To improve the overall accuracy of time domain Fourier simulations, I propose a family of new high-order time stepping schemes, based on a novel algorithm for designing time integration operators, to reduce temporal derivative discretization errors in a cost-effective fashion. I explore the pseudo-analytical method and propose high-order formulations to further improve its accuracy and ability to deal with spatial heterogeneities. I also extend the pseudo-analytical method to solve the variable-density acoustic and elastic wave equations. I thoroughly examine the finite difference method by conducting complete numerical dispersion and stability analyses. I comprehensively compare the finite difference method with the Fourier method and provide a series of detailed benchmarking tests of these two methods under a number of different simulation configurations. The Fourier method outperforms the finite difference method, in terms of both accuracy and efficiency, for both the theoretical studies and the numerical experiments, which provides solid evidence that the Fourier method is a superior scheme for large scale seismic modeling and imaging problems. / text
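To make the communication-computation overlap concrete, here is a generic MPI-3 sketch of the pattern behind the first parallel algorithm described above: a non-blocking all-to-all exchange is started, independent local work proceeds while the transfer is in flight, and the exchange is only waited on when its data is actually needed. This is not the dissertation's code; the buffer sizes and the stand-in computation are invented for illustration.

/* Generic pattern: overlap a non-blocking all-to-all with local work. */
#include <mpi.h>
#include <stdlib.h>

#define N_LOCAL 1024   /* hypothetical per-rank slab size */

static void compute_local_part(double *buf, int n) {
    for (int i = 0; i < n; i++) buf[i] *= 2.0;   /* stand-in for local transform work */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *send  = calloc((size_t)N_LOCAL * nprocs, sizeof *send);
    double *recv  = calloc((size_t)N_LOCAL * nprocs, sizeof *recv);
    double *local = calloc(N_LOCAL, sizeof *local);

    MPI_Request req;
    /* Start the transpose-style exchange without blocking... */
    MPI_Ialltoall(send, N_LOCAL, MPI_DOUBLE, recv, N_LOCAL, MPI_DOUBLE,
                  MPI_COMM_WORLD, &req);
    /* ...and hide its cost behind computation that does not need the incoming data. */
    compute_local_part(local, N_LOCAL);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(send); free(recv); free(local);
    MPI_Finalize();
    return 0;
}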
39

Multi-Core Memory System Design : Developing and using Analytical Models for Performance Evaluation and Enhancements

Dwarakanath, Nagendra Gulur January 2015 (has links) (PDF)
Memory system design is increasingly influencing modern multi-core architectures from both performance and power perspectives. Both main memory latency and bandwidth have improved at a rate that is slower than the increase in processor core count and speed. Off-chip memory, primarily built from DRAM, has received significant attention in terms of architecture and design for higher performance. These performance improvement techniques include sophisticated memory access scheduling, use of multiple memory controllers, mitigating the impact of DRAM refresh cycles, and so on. At the same time, new non-volatile memory technologies have become increasingly viable in terms of performance and energy. These alternative technologies offer different performance characteristics as compared to traditional DRAM. With the advent of 3D stacking, on-chip memory in the form of 3D stacked DRAM has opened up avenues for addressing the bandwidth and latency limitations of off-chip memory. Stacked DRAM is expected to offer abundant capacity (hundreds of MBs to a few GBs) at higher bandwidth and lower latency. Researchers have proposed to use this capacity as an extension to main memory, or as a large last-level DRAM cache. When leveraged as a cache, stacked DRAM provides opportunities and challenges for improving cache hit rate, access latency, and off-chip bandwidth. Thus, designing off-chip and on-chip memory systems for multi-core architectures is complex, compounded by the myriad architectural, design and technological choices, combined with the characteristics of application workloads. Applications have inherent spatial locality and access parallelism that influence the memory system response in terms of latency and bandwidth. In this thesis, we construct an analytical model of the off-chip main memory system to comprehend this diverse space and to study the impact of memory system parameters and workload characteristics from latency and bandwidth perspectives. Our model, called ANATOMY, uses a queuing network formulation of the memory system, parameterized with workload characteristics, to obtain a closed-form solution for the average miss penalty experienced by the last-level cache. We validate the model across a wide variety of memory configurations on four-core, eight-core and sixteen-core architectures. ANATOMY is able to predict memory latency with average errors of 8.1%, 4.1% and 9.7% over quad-core, eight-core and sixteen-core configurations respectively. Further, ANATOMY identifies better-performing design points accurately, thereby allowing architects and designers to explore the more promising design points in greater detail. We demonstrate the extensibility and applicability of our model by exploring a variety of memory design choices such as the impact of clock speed, the benefit of multiple memory controllers, the role of banks and channel width, and so on. We also demonstrate ANATOMY's ability to capture architectural elements such as memory scheduling mechanisms and the impact of DRAM refresh cycles. In all of these studies, ANATOMY provides insight into sources of memory performance bottlenecks and is able to quantitatively predict the benefit of redressing them. An insight from the model suggests that provisioning multiple small row-buffers in each DRAM bank achieves better performance than the traditional one (large) row-buffer per bank design. Multiple row-buffers also enable new performance improvement opportunities such as intra-bank parallelism between data transfers and row activations, and smart row-buffer allocation schemes based on workload demand. Our evaluation (using both the analytical model and detailed cycle-accurate simulation) shows that the proposed DRAM re-organization achieves significant speed-up as well as energy reduction. Next we examine the role of on-chip stacked DRAM caches in improving performance by reducing the load on off-chip main memory. We extend ANATOMY to cover DRAM caches. ANATOMY-Cache takes into account all the key parameters and design issues governing DRAM cache organization, namely: where the cache metadata is stored and accessed, the role of cache block size and set associativity, and the impact of block size on row-buffer hit rate and off-chip bandwidth. Yet the model is kept simple and provides a closed-form solution for the average miss penalty experienced by the last-level SRAM cache. ANATOMY-Cache is validated against detailed architecture simulations and shown to have latency estimation errors of 10.7% and 8.8% on average in quad-core and eight-core configurations respectively. An interesting insight from the model suggests that under high load it is better to bypass the congested DRAM cache and leverage the available idle main memory bandwidth. We use this insight to propose a refresh reduction mechanism that virtually eliminates refresh overhead in DRAM caches. We implement a low-overhead hardware mechanism to record accesses to recent DRAM cache pages and refresh only these pages. Older cache pages are considered invalid and serviced from the (idle) main memory. This technique achieves an average refresh reduction of 90% with resulting memory energy savings of 9% and an overall performance improvement of 3.7%. Finally, we propose a new DRAM cache organization that achieves higher cache hit rate, lower latency and lower off-chip bandwidth demand. Called the Bi-Modal Cache, our cache organization brings three independent improvements together: (i) it enables parallel tag and data accesses, (ii) it eliminates a large fraction of tag accesses entirely by use of a novel way locator, and (iii) it improves cache space utilization by organizing the cache sets as a combination of some big blocks (512B) and some small blocks (64B). The Bi-Modal Cache reduces hit latency by use of the way locator and parallel tag and data accesses. It improves hit rate by leveraging the cache capacity efficiently: blocks with low spatial reuse are allocated in the cache at 64B granularity, reducing both wasted off-chip bandwidth and cache internal fragmentation. The increased cache hit rate leads to a reduction in off-chip bandwidth demand. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves overall performance improvements of 10.8%, 13.8% and 14.0% in quad-core, eight-core and sixteen-core workloads respectively over an aggressive baseline.
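The abstract does not reproduce ANATOMY's closed-form expression, but the flavour of a queuing-network formulation can be illustrated with a deliberately simplified M/M/1 calculation: given a service time and an arrival rate (both numbers below are invented), utilization and average latency fall out in closed form. This is only an illustration of the modeling style, not the ANATOMY model itself.

/* Back-of-the-envelope M/M/1 illustration of how a queuing formulation
 * maps workload parameters to an average memory latency. NOT the
 * ANATOMY closed form; parameters are hypothetical. */
#include <stdio.h>

int main(void) {
    double service_time_ns = 30.0;              /* assumed DRAM service time */
    double mu = 1.0 / service_time_ns;          /* service rate (requests/ns) */
    double lambda = 0.02;                       /* assumed arrival rate (requests/ns) */
    double rho = lambda / mu;                   /* utilization */

    if (rho >= 1.0) { printf("unstable: demand exceeds bandwidth\n"); return 1; }

    double wait_ns  = rho / (mu - lambda);      /* mean queuing delay */
    double total_ns = 1.0 / (mu - lambda);      /* mean time in system (queue + service) */
    printf("utilization %.2f, queuing delay %.1f ns, avg latency %.1f ns\n",
           rho, wait_ns, total_ns);
    return 0;
}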
40

Rôle de l'acétylation des histones dans différentes formes de mémoire impliquant l'hippocampe et le striatum chez la souris : effet du vieillissement [Role of histone acetylation in different forms of memory involving the hippocampus and the striatum in mice: effect of aging]

Dagnas, Malorie 14 December 2012 (has links)
Les modifications post-traductionnelles des histones jouent un rôle majeur dans la régulation de l’expression de gènes impliqués dans la plasticité et la mémoire. Parmi ces modifications, l’acétylation des histones permet le maintien de la chromatine dans un état « permissif », accessible pour la transcription. Nos travaux visent à identifier le rôle joué par l’acétylation de deux histones, H3 et H4, dans la formation de différentes formes de mémoire mettant en jeu les systèmes hippocampique et striatal chez la souris. Nous avons également recherché si des perturbations d’acétylation des histones sont responsables des déficits mnésiques observés au cours du vieillissement. Nous avons utilisé deux types d’apprentissage en piscine de Morris permettant de dissocier la mémoire spatiale, impliquant principalement l’hippocampe et la mémoire procédurale/indicée, impliquant le striatum. Nos résultats mettent en lumière une régulation différentielle de l’acétylation des histones dans l’hippocampe et le striatum selon la nature de la tâche et l’âge des animaux. L’apprentissage spatial induit une augmentation de l’acétylation des histones sélectivement dans l’hippocampe (CA1 et gyrus denté) alors que la tâche indicée augmente l’acétylation des histones spécifiquement dans le striatum. Nous montrons également que des changements opposés de l’acétylation de H3 (augmentation) et de H4 (diminution) dans l’hippocampe pourraient contribuer aux déficits de mémoire spatiale observés chez les souris âgées. Lors d’un test de compétition en piscine de Morris, durant lequel les souris ont le choix entre les stratégies spatiale et indicée pour résoudre la tâche, l’injection intra-hippocampique de Trichostatine A (TSA), un inhibiteur des histones déacétylases, immédiatement après l’apprentissage, perturbe la fonction striatale et favorise l’utilisation préférentielle de la stratégie spatiale hippocampique. Cependant, cet effet de la TSA est absent chez les souris âgées dont la fonction hippocampique est altérée. Dans une dernière série d’expérience, l’analyse des effets d’une injection intra-hippocampique de TSA, après un apprentissage spatial, a permis de préciser les contributions respectives des histones H3/H4 et du facteur de transcription CREB dans les déficits mnésiques associés au vieillissement. Dans leur ensemble, nos travaux apportent des éléments importants concernant l’importance de l’acétylation des histones dans la modulation des interactions entre systèmes de mémoire hippocampique et striatal. / Post-translational modifications of histone proteins play a crucial role in regulating plasticity and memory-related gene expression. Among these modifications, histone acetylation leads to a relaxed or “opened” chromatin state, permissive for transcription. Our work aims to identify the role played by histone H3 and H4 acetylation in the formation of different forms of memory involving hippocampal and striatal systems in mice. We also examined whether alterations of histone acetylation are responsible for age-associated memory deficits. We used two versions of the Morris water maze learning task to dissociate a spatial form of memory that relies on the hippocampus and a procedural/cued memory supported by the striatum. Our results highlight a differential regulation of histone acetylation within the hippocampus and striatum depending on the nature of the task and age of animals. 
Spatial and cued learning elicited histone acetylation selectively in the hippocampus (CA1 region and dentate gyrus) and the striatum, respectively. Age-related spatial memory deficits were associated with opposite changes in H3 acetylation (increase) and H4 acetylation (decrease) selectively in the hippocampus. During a water maze competition task in which mice can choose between spatial and cue-guided strategies, intra-hippocampal injection of Trichostatin A (TSA), a histone deacetylase inhibitor, immediately post-acquisition impaired striatal function and promoted the use of a hippocampus-based spatial strategy. However, this effect of TSA was absent in old mice, in which hippocampal function is impaired. In a final series of experiments, analysis of the effects of intra-hippocampal TSA injection immediately after spatial training helped to clarify the respective contributions of histone H3/H4 acetylation and the transcription factor CREB to the spatial memory deficits associated with aging. Taken together, our work underscores the importance of histone acetylation in modulating interactions between hippocampal and striatal memory systems.
