11

MULTIPLE MEMORY SYSTEMS IN PEOPLE WITH SCHIZOPHRENIA: POSSIBLE EFFECT OF ATYPICAL ANTI-PSYCHOTIC MEDICATIONS

Steel, RYLAND 23 July 2013 (has links)
Patients with schizophrenia are normally treated with one of several antipsychotic medications that differ from one another in the areas of the brain they affect, including the dorsal striatum, a subcortical section of the forebrain, and the prefrontal cortex (PFC), located in the anterior part of the frontal lobes. Two different tests of implicit memory, the probabilistic classification learning (PCL) task and the Iowa gambling task (IGT), have been shown to rely on the dorsal striatum and the PFC, respectively. Previous studies have shown that patients with schizophrenia treated with antipsychotics that affect the dorsal striatum (e.g., risperidone) have altered performance on the PCL, and those treated with antipsychotics that affect the PFC (e.g., clozapine) have altered performance on the IGT. We tested the hypothesis that patients with schizophrenia treated with olanzapine would perform more poorly on the IGT, but not the PCL, when compared with controls. This study aimed to clarify conflicting results from prior experiments on the effects of olanzapine on implicit memory in people with schizophrenia. We also hypothesized that the performance of patients taking aripiprazole would be comparable to that of patients taking risperidone or a first-generation antipsychotic (FGA); however, we were unable to recruit a sufficient number of participants to test this hypothesis. Patients with schizophrenia, a mental disorder characterized by a breakdown in the relations between thoughts, emotions, and behavior, treated with olanzapine were recruited through local psychiatric clinics or a newspaper advertisement. Administration of the Brief Psychiatric Rating Scale (BPRS) and the Mini Mental State Examination (MMSE) preceded a brief demographic questionnaire. Participants were tested on the PCL and the IGT using a personal computer. Results revealed poorer performance on both the MMSE and the BPRS for patients compared with controls. Patients taking olanzapine were impaired in learning the PCL, but not the IGT, compared with controls. These results suggest that olanzapine acts on the PFC to augment IGT performance, but further studies are needed. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2013-07-23 15:09:21.55
12

L’adolescence, une période de vulnérabilité aux effets de régimes obésogènes sur la mémoire : études des fonctions hippocampiques et amygdaliennes / Adolescence, a vulnerable period to the effects of obesogenic diets on memory : Special emphasis on hippocampal and amygdala systems

Boitard, Chloé 13 December 2013 (has links)
The obesity pandemic is associated with cognitive and emotional disorders in both humans and animals, and obesity prevalence among children and adolescents is increasing at an alarming rate. Adolescence is a crucial period for the maturation of brain structures, notably the hippocampus and the amygdala, which underlie cognitive processes for the rest of an individual's life. However, no study had investigated whether this developmental period is particularly vulnerable, compared with adulthood, to the effects of obesity on memory. We therefore made this comparison in rodents, modelling obesity by exposure to a high-fat diet (HFD) either during a period including adolescence (from weaning to adulthood) or during adulthood only. We show that obesity induced at adolescence produces memory impairments that are not found when obesity is induced at adulthood. Because most studies of obesity have reported deficits in hippocampus-dependent memory, we first focused on hippocampal function; we then examined the amygdala system, which supports emotional memory and has received little attention in the context of obesity. Both systems were assessed with behavioural tasks measuring memory performance, together with cellular imaging and electrophysiology measuring cellular plasticity. Obesity induced at adolescence affected memory and plasticity in these systems in a bidirectional way, impairing hippocampal function and enhancing amygdala function. Regarding mechanisms, we found an exacerbated inflammatory response specifically in the hippocampus of animals exposed to the HFD during adolescence, which could explain the hippocampal deficits, and we showed that dysregulation of the hypothalamic-pituitary-adrenal axis in these animals is responsible for the behavioural and cellular effects observed in the amygdala. Altogether, these results highlight the urgency of studying juvenile obesity, whose marked effects on cognitive and emotional function could substantially impair quality of life, contribute to social and occupational dysfunction, and require increased care for these individuals throughout their lives.
13

The Influence of Communication Networks and Turnover on Transactive Memory Systems and Team Performance

Kush, Jonathan 01 May 2016 (has links)
In this dissertation, I investigate predictors and consequences of transactive memory system (TMS) development. A transactive memory system is a shared system for encoding, storing, and recalling who knows what within a group. Groups with well-developed transactive memory systems typically perform better than groups lacking such memory systems. I study how communication enhances the development of TMS and how turnover disrupts both TMS and its relationship to group performance. More specifically, I examine how communication networks affect the amount of communication, how the structure of the communication network affects the extent to which the group members share a strong identity as a group, and how both of these factors affect a group’s TMS. I also analyze how turnover disrupts the relationship between transactive memory systems and group performance. In addition, I examine how the communication network and turnover interact to affect group performance. I analyze these effects in three laboratory studies. The controlled setting of the experimental laboratory permits me to make causal inferences about the relationship of turnover and the communication network to group outcomes. Results promise to advance theory about transactive memory systems and communication networks.
14

Distributed Execution of Recursive Irregular Applications

Nikhil Hegde (7043171) 13 August 2019 (has links)
Massive computing power, and the applications running on it, primarily confined to expensive supercomputers a decade ago, have now become mainstream through the availability of clusters of commodity computers and high-speed interconnects running big-data-era applications. The challenges associated with programming such systems to effectively utilize their computing power have led to the creation of intuitive abstractions and implementations targeting average users, domain experts, and savvy (parallel) programmers. There is often a trade-off between ease of programming and performance when using these abstractions. This thesis develops tools to bridge the gap between ease of programming and performance of irregular programs (programs that involve one or more of irregular data structures, control structures, and communication patterns) on distributed-memory systems.

Irregular programs feature heavily in domains ranging from data mining to bioinformatics to scientific computing. In contrast to regular applications such as stencil codes and dense matrix-matrix multiplications, which have a predictable pattern of data access and control flow, typical irregular applications operate over graphs, trees, and sparse matrices and involve input-dependent data access patterns and control flow. This makes it difficult to apply optimizations such as those targeting locality and parallelism to programs implementing irregular applications. Moreover, irregular programs are often used with large data sets that prohibit single-node execution due to memory limitations on the node. Hence, distributed solutions are necessary in order to process all the data.

In this thesis, we introduce SPIRIT, a framework consisting of an abstraction and a space-adaptive runtime system for simplifying the creation of distributed implementations of recursive irregular programs based on spatial acceleration structures. SPIRIT addresses the insufficiency of traditional data-parallel approaches and existing systems in effectively parallelizing computations involving repeated tree traversals. SPIRIT employs locality optimizations applied in a shared-memory context, introduces a novel pipeline-parallel approach to execute distributed traversals, and trades off performance against memory usage to create a space-adaptive system that achieves scalable performance and outperforms implementations in contemporary distributed graph-processing frameworks.

We next introduce Treelogy to understand the connection between optimizations and tree algorithms. Treelogy provides an ontology and a benchmark suite for a broader class of tree algorithms to help answer: (i) is there any existing optimization that is applicable or effective for a new tree algorithm? (ii) can a new optimization developed for a tree algorithm be applied to existing tree algorithms from other domains? We show that a categorization (ontology) based on the structural properties of tree algorithms is useful both for developers of new optimizations and for creators of new tree algorithms. With the help of a suite of tree traversal kernels spanning the ontology, we show that the GPU, shared-memory, and distributed-memory implementations are scalable and that the two-point correlation algorithm with vptree performs better than the standard kdtree implementation.

In the final part of the thesis, we explore the possibility of automatically generating efficient distributed-memory implementations of irregular programs. As manually creating distributed-memory implementations is challenging due to the explicit need for managing tasks, parallelism, communication, and load balancing, we introduce a framework, D2P, to automatically generate efficient distributed implementations of recursive divide-and-conquer algorithms. D2P generates a distributed implementation of a recursive divide-and-conquer algorithm from its specification, which is a high-level outline of a recursive formulation. We evaluate D2P with recursive dynamic programming (DP) algorithms. The computation in DP algorithms is not irregular per se; however, when distributed, the computation in efficient recursive formulations of DP algorithms requires irregular communication. User-configurable knobs in D2P allow tuning the amount of available parallelism. Results show that D2P programs scale well, are significantly better than those produced using a state-of-the-art framework for parallelizing iterative DP algorithms, and outperform even hand-written distributed-memory implementations in most cases.
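For readers unfamiliar with the recursive divide-and-conquer structure that a framework like D2P starts from, the following is a minimal generic divide/base/combine skeleton in C, instantiated for a trivial range sum. It is purely illustrative: the Problem type and function names are invented here, and D2P's actual specification interface is not shown in the abstract.

```c
/*
 * Hypothetical sketch of a recursive divide-and-conquer outline, in the
 * spirit of the "specification" a framework like D2P might start from.
 * This is NOT D2P's actual interface; names and structure are illustrative.
 */
#include <stddef.h>
#include <stdio.h>

typedef struct {            /* a subproblem: a half-open range [lo, hi) */
    const long *data;
    size_t lo, hi;
} Problem;

static int is_base(const Problem *p) {
    return p->hi - p->lo <= 4;          /* small enough: solve directly */
}

static long solve_base(const Problem *p) {
    long acc = 0;
    for (size_t i = p->lo; i < p->hi; i++) acc += p->data[i];
    return acc;
}

/* Split one problem into two independent subproblems.  In a distributed
 * runtime, each subproblem could be shipped to a different node or task. */
static void divide(const Problem *p, Problem *left, Problem *right) {
    size_t mid = p->lo + (p->hi - p->lo) / 2;
    *left  = (Problem){ p->data, p->lo, mid };
    *right = (Problem){ p->data, mid, p->hi };
}

static long combine(long a, long b) { return a + b; }

static long solve(const Problem *p) {
    if (is_base(p)) return solve_base(p);
    Problem l, r;
    divide(p, &l, &r);
    return combine(solve(&l), solve(&r));   /* recursive case */
}

int main(void) {
    long data[16];
    for (int i = 0; i < 16; i++) data[i] = i;
    Problem root = { data, 0, 16 };
    printf("sum = %ld\n", solve(&root));    /* prints sum = 120 */
    return 0;
}
```

In a distributed setting, the two recursive calls are exactly the points where such a framework would spawn remote tasks and handle the resulting communication and load balancing.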
15

IMPROVING THE PERFORMANCE AND ENERGY EFFICIENCY OF EMERGING MEMORY SYSTEMS

Guo, Yuhua 01 January 2018 (has links)
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits DRAM scalability and causes performance degradation. Second, although memory capacity has increased dramatically in the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory wall problem. Third, DRAM dissipates considerable power and has been reported to account for as much as 40% of total system energy, a problem that is exacerbated as DRAM scales up. To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose a high-performance and bandwidth-efficient approach, called SELF, to break the memory bandwidth wall by exploiting die-stacked DRAM as a part of memory; and 3) we propose a cost-effective and energy-efficient architecture for hybrid memory systems composed of high bandwidth memory (HBM) and phase change memory (PCM), called Dual Role HBM (DR-HBM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to the HBM to improve performance, while cold pages are kept in the PCM to save energy.
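As a loose software analogy for the hot/cold page management that DR-HBM performs in hardware, the sketch below promotes a page from the slow tier to the fast tier once an access counter crosses a threshold and demotes pages that go cold. The page count, threshold, and decay policy are invented for illustration and do not reflect the dissertation's actual mechanism.

```c
/*
 * Illustrative software model of threshold-based hot-page promotion between
 * a fast tier ("HBM") and a slow tier ("PCM").  This is only an analogy for
 * the idea described in the abstract, not the DR-HBM hardware design;
 * HOT_THRESHOLD and the decay policy are invented for the example.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_PAGES     1024
#define HOT_THRESHOLD 8          /* accesses before promotion (arbitrary) */

enum tier { TIER_PCM = 0, TIER_HBM = 1 };

static uint32_t access_count[NUM_PAGES];
static uint8_t  page_tier[NUM_PAGES];      /* zero-initialized: all in PCM */

static void touch_page(uint32_t page) {
    access_count[page]++;
    if (page_tier[page] == TIER_PCM && access_count[page] >= HOT_THRESHOLD) {
        page_tier[page] = TIER_HBM;        /* migrate hot page to fast tier */
        printf("promote page %u to HBM\n", (unsigned)page);
    }
}

/* Periodic decay: cold pages lose their history and eventually demote. */
static void decay_epoch(void) {
    for (uint32_t p = 0; p < NUM_PAGES; p++) {
        access_count[p] /= 2;
        if (page_tier[p] == TIER_HBM && access_count[p] == 0) {
            page_tier[p] = TIER_PCM;       /* demote a page that went cold */
            printf("demote page %u to PCM\n", (unsigned)p);
        }
    }
}

int main(void) {
    for (int i = 0; i < 100; i++) touch_page(7);   /* page 7 becomes hot */
    touch_page(42);                                 /* page 42 stays cold */
    decay_epoch();
    return 0;
}
```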
16

Remembering without storing: beyond archival models in the science and philosophy of human memory

O'Loughlin, Ian 01 July 2014 (has links)
Models of memory in cognitive science and philosophy have traditionally explained human remembering in terms of storage and retrieval. This tendency has been entrenched by reliance on computationalist explanations over the course of the twentieth century; even research programs that eschew computationalism in name, or attempt the revision of traditional models, demonstrate a tacit commitment to computationalist assumptions. It is assumed that memory must be stored by means of an isomorphic trace, that memory processes must divide into conceptually distinct systems and phases, and that human remembering consists in inner, cognitive processes that are implemented by distinct neural processes. This dissertation draws on recent empirical work, and on philosophical arguments from Ludwig Wittgenstein and others, to demonstrate that this latent computationalism in the study of memory is problematic, and that it can and should be eliminated. Cognitive psychologists studying memory have encountered numerous findings in recent decades that belie archival models. In cognitive neuroscience, establishing the neural basis of storage and retrieval processes has proven elusive. A number of revised models on offer in memory science, which have taken these issues into account, fail to sufficiently extricate themselves from the archival framework. Several impasses in memory science are products of these underlying computationalist assumptions. Wittgenstein and other philosophers offer a number of arguments against the need for, and the efficacy of, the storage and retrieval of traces in human remembering. A study of these arguments clarifies the ways in which these computationalist assumptions are presently impeding the science of memory, and provides ways forward in removing them. We can and should characterize and model human memory without invoking the storage and retrieval of traces. A range of work in connectionism, dynamical systems theory, and recent philosophical accounts of memory demonstrates how the science of memory can proceed without these assumptions, toward non-archival models of remembering.
17

Architecting heterogeneous memory systems with 3D die-stacked memory

Sim, Jae Woong 21 September 2015 (has links)
The main objective of this research is to efficiently enable 3D die-stacked memory and heterogeneous memory systems. 3D die-stacking is an emerging technology that allows for large amounts of in-package high-bandwidth memory storage. Die-stacked memory has the potential to provide extraordinary performance and energy benefits for computing environments, from data-intensive to mobile computing. However, incorporating die-stacked memory into computing environments requires innovations across the system stack, from hardware to software. This dissertation presents several architectural innovations to practically deploy die-stacked memory in a variety of computing systems. First, this dissertation proposes using die-stacked DRAM as a hardware-managed cache in a practical and efficient way. The proposed DRAM cache architecture employs two novel techniques: hit-miss speculation and self-balancing dispatch. The proposed techniques virtually eliminate the hardware overhead of maintaining a multi-megabyte SRAM structure when scaling to gigabytes of stacked DRAM cache, and improve overall memory bandwidth utilization. Second, this dissertation proposes a DRAM cache organization that provides a high level of reliability for die-stacked DRAM caches in a cost-effective manner. The proposed DRAM cache uses error-correcting codes (ECCs), strong checksums (CRCs), and dirty data duplication to detect and correct a wide range of stacked DRAM failures, from traditional bit errors to large-scale row, column, bank, and channel failures, within the constraints of commodity, non-ECC DRAM stacks. With only a modest performance degradation compared to a DRAM cache with no ECC support, the proposed organization can correct all single-bit failures and 99.9993% of all row, column, and bank failures. Third, this dissertation proposes architectural mechanisms to use large, fast, on-chip memory structures as part of memory (PoM) seamlessly through the hardware. The proposed design achieves the performance benefit of on-chip memory caches without sacrificing a large fraction of total memory capacity to serve as a cache. To achieve this, PoM implements the ability to dynamically remap regions of memory based on their access patterns and expected performance benefits. Lastly, this dissertation explores a new usage model for die-stacked DRAM involving a hybrid of caching and virtual memory support. In the common case, where the system's physical memory is not over-committed, die-stacked DRAM operates as a cache to provide performance and energy benefits to the system. However, when the workload's active memory demands exceed the capacity of physical memory, the proposed scheme dynamically converts the stacked DRAM cache into a fast swap device to avoid the otherwise grievous performance penalty of swapping to disk.
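The dissertation's hit-miss speculation mechanism is not spelled out in the abstract, so the sketch below illustrates only the general concept: a small table of saturating counters, indexed by a hash of the block address, guesses whether a stacked-DRAM cache access will hit, so that a predicted miss can start the off-package memory access without waiting for the tag check. The table size, hash, and training policy are arbitrary choices for the example.

```c
/*
 * Generic hit/miss predictor sketch for a DRAM cache: a small table of
 * 2-bit saturating counters indexed by a hash of the block address.
 * This illustrates the general idea of hit-miss speculation only; it is
 * not the predictor proposed in the dissertation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PRED_ENTRIES 4096                 /* arbitrary table size */

static uint8_t counters[PRED_ENTRIES];    /* 0..3, >=2 means "predict hit" */

static uint32_t hash_block(uint64_t block_addr) {
    return (uint32_t)((block_addr ^ (block_addr >> 13)) % PRED_ENTRIES);
}

bool predict_hit(uint64_t block_addr) {
    return counters[hash_block(block_addr)] >= 2;
}

/* Train the predictor with the actual outcome once the tags are checked. */
void train(uint64_t block_addr, bool was_hit) {
    uint8_t *c = &counters[hash_block(block_addr)];
    if (was_hit) { if (*c < 3) (*c)++; }
    else         { if (*c > 0) (*c)--; }
}

/* Usage idea: on a predicted miss, issue the off-package memory request in
 * parallel with the DRAM-cache tag probe instead of serializing them. */
int main(void) {
    train(0x1000, true);
    train(0x1000, true);
    printf("predict 0x1000: %s\n", predict_hit(0x1000) ? "hit" : "miss");
    return 0;
}
```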
18

Understanding Multicore Performance: Efficient Memory System Modeling and Simulation

Sandberg, Andreas January 2014 (has links)
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10 000×–100 000× slower than native execution, which leads to three problems. First, the high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimization, where rapid turn-around is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications be simulated multiple times with different interleavings to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually addressed by simulating only a relatively small number of instructions near the start of an application, with the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies to accurately model multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles, which can be measured on existing hardware, can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleavings without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator. By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution. / CoDeR-MP / UPMARC
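As a minimal illustration of modeling cache behavior from a microarchitecture-independent profile (not the specific methods developed in this thesis), the sketch below computes LRU stack distances for a short address trace and estimates the hit ratio of a fully associative LRU cache of any size from the resulting histogram.

```c
/*
 * Minimal illustration of profile-driven cache modeling (not the thesis's
 * methods): compute LRU stack distances for a trace of block addresses,
 * then estimate the hit ratio of a fully associative LRU cache with C
 * blocks as the fraction of accesses whose stack distance is below C.
 */
#include <stdio.h>

#define MAX_BLOCKS 100000

static long lru_stack[MAX_BLOCKS];   /* most recently used block at index 0 */
static long stack_len = 0;
static long hist[MAX_BLOCKS];        /* histogram of observed distances */

/* Return the LRU stack distance of 'addr' (-1 for a cold miss) and move it
 * to the top of the stack.  O(n) per access, which is fine for a sketch. */
static long access_block(long addr) {
    long depth = -1;
    for (long i = 0; i < stack_len; i++)
        if (lru_stack[i] == addr) { depth = i; break; }
    long end = (depth >= 0) ? depth : stack_len;
    for (long i = end; i > 0; i--) lru_stack[i] = lru_stack[i - 1];
    lru_stack[0] = addr;
    if (depth < 0) stack_len++;
    return depth;
}

int main(void) {
    long trace[] = { 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4 };  /* toy "profile" */
    long n = (long)(sizeof trace / sizeof trace[0]);

    for (long i = 0; i < n; i++) {
        long d = access_block(trace[i]);
        if (d >= 0) hist[d]++;               /* cold misses hit no size */
    }
    for (long c = 1; c <= 4; c++) {
        long hits = 0;
        for (long d = 0; d < c; d++) hits += hist[d];
        printf("cache of %ld blocks: estimated hit ratio %.2f\n",
               c, (double)hits / (double)n);
    }
    return 0;
}
```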
19

The GraphGrind framework: fast graph analytics on large shared-memory systems

Sun, Jiawen January 2018 (has links)
As shared-memory systems now support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared-memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs. This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph analytics in shared-memory systems. The developed techniques are implemented in GraphGrind, a new shared-memory graph analytics framework. First, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms to distinguish whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning (the edge-balanced heuristic is illustrated in the sketch after this abstract). This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation in GraphGrind shows that this approach outperforms state-of-the-art shared-memory graph analytics frameworks, including Ligra by 1.46x on average and Polymer by 1.16x on average, using a variety of graph algorithms and datasets. Secondly, this dissertation demonstrates that increasing the number of graph partitions is effective in improving temporal locality due to smaller working sets. However, the increasing number of partitions results in vertex replication in some graph data structures. This dissertation therefore adopts a graph layout that is immune to vertex replication, and designs an automatic graph traversal algorithm that extends the previously established graph traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer. Finally, this dissertation presents a graph ordering algorithm that challenges the widely accepted heuristic of balancing the number of edges per partition and minimising edge or vertex cut. This algorithm balances the number of edges per partition as well as the number of unique destinations of those edges. It balances edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.
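The edge-balanced heuristic referred to above can be illustrated with a short C sketch (this is an illustration of the heuristic, not GraphGrind's partitioner): split a CSR graph into contiguous vertex ranges so that each range holds roughly |E|/P edges. With a skewed degree distribution the edge counts balance while the vertex counts do not, which is exactly the imbalance the classification of algorithms is meant to address.

```c
/*
 * Sketch of heuristic edge-balanced partitioning of a CSR graph into P
 * contiguous vertex ranges (illustrative only, not GraphGrind's code).
 */
#include <stdio.h>

/* Split vertices [0, n) into P ranges so each holds roughly |E|/P edges.
 * row_ptr is the CSR offset array: row_ptr[v+1] - row_ptr[v] = degree(v). */
static void edge_balanced_partition(const long *row_ptr, long n, int P,
                                    long *part_start /* length P+1 */) {
    long total_edges = row_ptr[n];
    long target = (total_edges + P - 1) / P;   /* edge budget per partition */
    int  p = 0;
    part_start[0] = 0;
    for (long v = 0; v < n && p < P - 1; v++) {
        /* close the current partition once it reaches its edge budget */
        if (row_ptr[v + 1] - row_ptr[part_start[p]] >= target)
            part_start[++p] = v + 1;
    }
    while (p < P) part_start[++p] = n;         /* close remaining ranges */
}

int main(void) {
    /* Tiny skewed example: 6 vertices with degrees 5, 1, 1, 1, 1, 1. */
    long row_ptr[] = { 0, 5, 6, 7, 8, 9, 10 };
    long part_start[3];
    edge_balanced_partition(row_ptr, 6, 2, part_start);
    /* Both partitions hold 5 edges, but 1 vs 5 vertices: edge balance,
     * vertex imbalance. */
    for (int p = 0; p < 2; p++)
        printf("partition %d: vertices [%ld, %ld), %ld edges\n", p,
               part_start[p], part_start[p + 1],
               row_ptr[part_start[p + 1]] - row_ptr[part_start[p]]);
    return 0;
}
```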
20

Estudo e implementação da otimização de Preload de dados usando o processador XScale / Study and implementation of data Preload optimization using XScale

Oliveira, Marcio Rodrigo de 08 October 2005 (has links)
Advisor: Guido Costa Souza Araujo / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Nowadays there is a large market for embedded-system applications, which make up a growing part of everyday life through consumer electronics products such as cellular phones, palmtops and electronic organizers. Consumer electronics are designed under stringent constraints, such as reduced cost, low power consumption and high performance. The code produced by compilers for the programs that run on these products must therefore execute quickly while saving battery power. These improvements are achieved through transformations of the source program called code optimizations. Data preload consists of moving data from a higher level of the memory hierarchy to a lower level before the data is actually needed, which can reduce the memory latency penalty. This work describes the implementation of the data preload optimization in the Xingo compiler for the Pocket PC platform, whose architecture uses an XScale processor. The XScale architecture provides a preload instruction whose purpose is to prefetch data into the cache. The optimization inserts (through heuristics) preload instructions into the intermediate code of the source program, trying to predict which data will be used and will miss in the cache, and bringing that data into the cache before it is used. This strategy aims to minimize the data-cache miss rate, reducing the time spent on memory accesses. Several well-known benchmark programs were used to evaluate the results, among them DSPstone and MiBench. The results show that this data preload optimization for the Pocket PC yields a considerable performance improvement for most of the tested programs, with several programs improving by more than 30%. / Master's degree / Code Optimization / Master in Computer Science
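The Xingo pass itself is not reproduced here, but the effect it describes can be approximated by hand in C using GCC's __builtin_prefetch, which on ARM/XScale targets is typically lowered to the PLD preload instruction; the prefetch distance below is an arbitrary illustrative choice.

```c
/*
 * Hand-written illustration of data preloading in C.  GCC's
 * __builtin_prefetch typically lowers to the target's prefetch instruction
 * (PLD on ARM/XScale).  This approximates the effect of the compiler
 * optimization described above; it is not the Xingo pass itself, and the
 * prefetch distance of 16 elements is an arbitrary choice.
 */
#include <stddef.h>
#include <stdio.h>

#define PREFETCH_DISTANCE 16   /* elements ahead; would normally be tuned */

/* Sum an array, issuing a software preload hint a fixed distance ahead.
 * __builtin_prefetch(addr, rw, locality): rw=0 means read access and
 * locality=1 means low temporal locality. */
long sum_with_preload(const long *a, size_t n) {
    long acc = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        acc += a[i];
    }
    return acc;
}

int main(void) {
    long a[1024];
    for (size_t i = 0; i < 1024; i++) a[i] = (long)i;
    printf("sum = %ld\n", sum_with_preload(a, 1024));   /* prints 523776 */
    return 0;
}
```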
