11

Collaborative information technology moderation in dynamic teamwork with team member departure

Keskin, Tayfun 20 October 2010 (has links)
The objective of this dissertation is to provide a theoretical foundation for the moderating effect of collaborative information technology on team performance and to give empirical evidence supporting this relationship. The model provided in this study is supported by analytical proofs for the proposed hypotheses, which define relationships among the research's constructs: departure (a reduction in the number of team members), collaborative information technology functionality, transactive memory strength, and team performance. This research offers a theory that uses transactive memory systems (TMS) to examine the departure problem. The main research question is: can collaborative information technologies (CIT) alleviate the negative effects of departure? The theory in this study is structured around the indicators of TMS: specialization, coordination, and credibility. Findings showed that the level of CIT functionality plays a role in enhancing group performance. This role is not direct; rather, it is a moderation effect that alleviates the negative impact of departure. In the absence of departure, the impact of CIT can appear inconsistent, as it can be either positive or negative. My analytical results explain why the information systems literature has offered conflicting arguments about the role of technology. I propose that particular dynamic events and incidents, such as employee departure, help us understand the impact of CIT more clearly. Moreover, I employ transactive memory theory to explain how individuals develop and exchange knowledge in a group and how skills and knowledge can be lost through departure. I also explain why and how team performance benefits from CIT when departure occurs.
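To make the moderation relationship concrete, the following sketch, which is purely illustrative and uses assumed variable names and coefficients rather than anything from the dissertation, fits a linear model with a departure × CIT interaction term, the standard statistical form of a moderation effect: the CIT term does not raise performance directly so much as it offsets the negative coefficient on departure.

```python
import numpy as np

# Illustrative data: 200 teams, a binary departure event, CIT functionality on a 0-1 scale.
rng = np.random.default_rng(0)
n = 200
departure = rng.integers(0, 2, n)          # 1 = a member left the team
cit = rng.uniform(0, 1, n)                 # collaborative IT functionality level

# Assumed data-generating process: departure hurts performance,
# but the penalty shrinks as CIT functionality rises (moderation).
performance = (5.0 - 2.0 * departure + 1.5 * departure * cit
               + rng.normal(0, 0.5, n))

# Fit performance ~ departure + cit + departure:cit by ordinary least squares.
X = np.column_stack([np.ones(n), departure, cit, departure * cit])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["intercept", "departure", "cit", "departure_x_cit"],
               np.round(coef, 2))))
# A positive departure_x_cit coefficient is the moderation effect:
# CIT offsets part of the performance loss caused by departure.
```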
12

Multiple Memory Systems in People with Schizophrenia: Possible Effect of Atypical Anti-Psychotic Medications

Steel, Ryland 23 July 2013 (has links)
Patients with schizophrenia are normally treated with one of several antipsychotic medications that differ from one another in the brain areas they affect, including the dorsal striatum, a subcortical region of the forebrain, and the prefrontal cortex (PFC), located in the anterior part of the frontal lobes. Two tests of implicit memory, the probabilistic classification learning (PCL) task and the Iowa gambling task (IGT), have been shown to rely on the dorsal striatum and the PFC, respectively. Previous studies have shown that patients with schizophrenia treated with antipsychotics that affect the dorsal striatum (e.g., risperidone) have altered performance on the PCL, and that those treated with antipsychotics that affect the PFC (e.g., clozapine) have altered performance on the IGT. We tested the hypothesis that patients with schizophrenia treated with olanzapine would perform more poorly on the IGT, but not the PCL, when compared with controls. This study aimed to clarify conflicting results from prior experiments on the effects of olanzapine on implicit memory in people with schizophrenia. We also hypothesized that the performance of patients taking aripiprazole would be comparable to that of patients taking risperidone or a first-generation antipsychotic (FGA); however, we were unable to recruit a sufficient number of participants to test this hypothesis. Patients with schizophrenia (a mental disorder characterized by a breakdown in the relations among thoughts, emotion, and behavior) treated with olanzapine were recruited through local psychiatric clinics or a newspaper advertisement. Administration of the Brief Psychiatric Rating Scale (BPRS) and the Mini Mental State Examination (MMSE) preceded a brief demographic questionnaire. Participants were tested on the PCL and the IGT using a personal computer. Results revealed poorer scores on both the MMSE and the BPRS for patients when compared with controls. Patients taking olanzapine were impaired in learning the PCL but not the IGT when compared with controls. Results suggest that olanzapine acts on the PFC to augment IGT performance, but further studies are needed. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2013-07-23 15:09:21.55
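For readers unfamiliar with the task, the sketch below simulates the payoff logic of a four-deck gambling task of the kind the IGT uses; the specific reward, loss, and probability values are illustrative assumptions, not the parameters used in this thesis. Two decks pay more per card but lose more over time, so learning to prefer the other two decks is the behavior the task measures.

```python
import random

# Illustrative four-deck payoff schedule in the spirit of the Iowa gambling task:
# decks A and B pay more per card but carry larger losses (net disadvantageous),
# decks C and D pay less but lose less (net advantageous). Values are assumptions.
DECKS = {
    "A": {"reward": 100, "loss": 250,  "p_loss": 0.5},
    "B": {"reward": 100, "loss": 1250, "p_loss": 0.1},
    "C": {"reward": 50,  "loss": 50,   "p_loss": 0.5},
    "D": {"reward": 50,  "loss": 250,  "p_loss": 0.1},
}

def play(deck: str, rng: random.Random) -> int:
    """Return the net outcome of drawing one card from the given deck."""
    d = DECKS[deck]
    loss = d["loss"] if rng.random() < d["p_loss"] else 0
    return d["reward"] - loss

rng = random.Random(0)
expected = {name: sum(play(name, rng) for _ in range(10_000)) / 10_000
            for name in DECKS}
print(expected)  # C and D come out ahead in the long run; A and B do not.
```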
13

L’adolescence, une période de vulnérabilité aux effets de régimes obésogènes sur la mémoire : études des fonctions hippocampiques et amygdaliennes / Adolescence, a vulnerable period to the effects of obesogenic diets on memory : Special emphasis on hippocampal and amygdala systems

Boitard, Chloé 13 December 2013 (has links)
Obesity, now considered pandemic, is associated with the emergence of cognitive and emotional disorders in humans as well as in animals. The prevalence of obesity is increasing drastically among children and adolescents. Yet adolescence is a critical period for the maturation of the brain structures (notably the hippocampus and the amygdala) that will underlie cognitive processes for the rest of an individual's life. However, no study had investigated the potential vulnerability of this developmental period to the effects of obesity on memory, compared with adulthood. We therefore performed this comparison in rodents, modeling obesity by exposure to a high-fat (HF) diet either during a period including adolescence or during adulthood only (i.e., excluding adolescence). We show that obesity induced during adolescence produces memory impairments that are not found when obesity is induced in adulthood. Because most studies of the effects of obesity have reported impairments of hippocampus-dependent memories, we first focused on hippocampal functions. We then explored the amygdala system, which is involved in emotional memories and has been little studied in the context of obesity. These two functional systems were examined through behavioral approaches assessing memory performance, as well as cellular imaging and electrophysiological approaches assessing cellular plasticity within these structures. We show that adolescent-onset obesity affects memory and plasticity in these systems in a bidirectional manner, degrading hippocampal functions and exacerbating amygdala functions. Regarding the mechanisms involved, we demonstrate an exacerbated inflammatory response specifically in the hippocampus of animals exposed to the HF diet during adolescence, which could explain the hippocampal deficits. Finally, we show that dysregulation of the hypothalamic-pituitary-adrenal axis in these animals is responsible for the behavioral and cellular effects observed in amygdala function. Taken together, these results highlight the urgent need for research on juvenile obesity, whose marked effects on cognitive and emotional functions could substantially impair quality of life and require increased care for these individuals throughout their lives. / The obesity pandemic is linked to cognitive and emotional disorders in humans. Obesity prevalence in adolescence is increasing at an alarming rate. Adolescence is a crucial period for the maturation of brain structures such as the hippocampus and the amygdala, which are particularly important for the neurocognitive shaping that serves the whole of later life. However, no study had investigated a potentially higher vulnerability of this specific developmental period to the effects of obesity on memory. For this purpose we compared, in rodent models, the effects of high-fat diet (HFD)-induced obesity during adolescence (from weaning to adulthood) with those at adulthood. We demonstrate that adolescence is more vulnerable than adulthood to the effects of obesity on memory. Most studies of the effects of obesity on memory have found deficits in hippocampus-dependent memories. We therefore first focused on hippocampal functioning. We also extended this investigation to another memory system that depends on the amygdala, since little was known about the effects of obesity on this structure. Using behavioral approaches to evaluate memory performance, together with cellular imaging and electrophysiology to assess cellular plasticity, we show that juvenile obesity affects both memory and plasticity in a bidirectional way, impairing hippocampal function and enhancing amygdala function. Looking for mechanisms to explain these effects, we found a potentiated inflammatory response specifically in the hippocampus that could explain decreased hippocampal function. We also demonstrated that deregulation of the hypothalamo-pituitary-adrenal axis is responsible for increased amygdala plasticity and memory. Altogether these results suggest that obesity during adolescence predisposes individuals to later maladaptive cognitive and emotional function. This is a major concern, as it could significantly impair these individuals' quality of life and contribute considerably to social and occupational dysfunction.
14

The Influence of Communication Networks and Turnover on Transactive Memory Systems and Team Performance

Kush, Jonathan 01 May 2016 (has links)
In this dissertation, I investigate predictors and consequences of transactive memory system (TMS) development. A transactive memory system is a shared system for encoding, storing, and recalling who knows what within a group. Groups with well-developed transactive memory systems typically perform better than groups lacking such memory systems. I study how communication enhances the development of TMS and how turnover disrupts both TMS and its relationship to group performance. More specifically, I examine how communication networks affect the amount of communication, how the structure of the communication network affects the extent to which the group members share a strong identity as a group, and how both of these factors affect a group’s TMS. I also analyze how turnover disrupts the relationship between transactive memory systems and group performance. In addition, I examine how the communication network and turnover interact to affect group performance. I analyze these effects in three laboratory studies. The controlled setting of the experimental laboratory permits me to make causal inferences about the relationship of turnover and the communication network to group outcomes. Results promise to advance theory about transactive memory systems and communication networks.
15

Distributed Execution of Recursive Irregular Applications

Nikhil Hegde (7043171) 13 August 2019 (has links)
Massive computing power, and the applications that run on it, were largely confined to expensive supercomputers a decade ago; they have now become mainstream through clusters of commodity computers with high-speed interconnects running big-data-era applications. The challenges of programming such systems to use their computing power effectively have led to intuitive abstractions and implementations targeting average users, domain experts, and savvy (parallel) programmers. There is often a trade-off between ease of programming and performance when using these abstractions. This thesis develops tools to bridge the gap between ease of programming and performance for irregular programs (programs that involve irregular data structures, control structures, or communication patterns) on distributed-memory systems.

Irregular programs feature heavily in domains ranging from data mining to bioinformatics to scientific computing. In contrast to regular applications such as stencil codes and dense matrix-matrix multiplications, which have predictable patterns of data access and control flow, typical irregular applications operate over graphs, trees, and sparse matrices and involve input-dependent data access patterns and control flow. This makes it difficult to apply optimizations, such as those targeting locality and parallelism, to programs implementing irregular applications. Moreover, irregular programs are often used with data sets so large that single-node execution is impossible due to per-node memory limits, so distributed solutions are necessary to process all the data.

In this thesis, we introduce SPIRIT, a framework consisting of an abstraction and a space-adaptive runtime system for simplifying the creation of distributed implementations of recursive irregular programs based on spatial acceleration structures. SPIRIT addresses the insufficiency of traditional data-parallel approaches and existing systems in effectively parallelizing computations involving repeated tree traversals. SPIRIT employs locality optimizations applied in a shared-memory context, introduces a novel pipeline-parallel approach to executing distributed traversals, and trades off performance against memory usage to create a space-adaptive system that achieves scalable performance and outperforms implementations in contemporary distributed graph-processing frameworks.

We next introduce Treelogy to understand the connection between optimizations and tree algorithms. Treelogy provides an ontology and a benchmark suite for a broad class of tree algorithms to help answer: (i) is any existing optimization applicable or effective for a new tree algorithm? (ii) can a new optimization developed for one tree algorithm be applied to existing tree algorithms from other domains? We show that a categorization (ontology) based on the structural properties of tree algorithms is useful both for developers of new optimizations and for creators of new tree algorithms. With the help of a suite of tree traversal kernels spanning the ontology, we show that the GPU, shared-memory, and distributed-memory implementations are scalable, and that the two-point correlation algorithm performs better with a vptree than with the standard kdtree implementation.

In the final part of the thesis, we explore automatically generating efficient distributed-memory implementations of irregular programs. Because manually creating distributed-memory implementations is challenging, with tasks, parallelism, communication, and load balancing all managed explicitly, we introduce a framework, D2P, that automatically generates efficient distributed implementations of recursive divide-and-conquer algorithms. D2P generates a distributed implementation of a recursive divide-and-conquer algorithm from its specification, which is a high-level outline of a recursive formulation. We evaluate D2P with recursive dynamic programming (DP) algorithms. The computation in DP algorithms is not irregular per se, but when distributed, efficient recursive formulations of DP algorithms require irregular communication. User-configurable knobs in D2P allow the amount of available parallelism to be tuned. Results show that D2P programs scale well, are significantly better than programs produced by a state-of-the-art framework for parallelizing iterative DP algorithms, and outperform even hand-written distributed-memory implementations in most cases.
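As an illustration of the repeated-tree-traversal pattern that SPIRIT and Treelogy target (the code is a simplified 1-D sketch with assumed names, not material from the thesis), the example below answers a two-point-correlation-style query by walking a spatial tree once per query point and pruning subtrees whose bounds cannot contain a match; it is this per-point traversal, repeated over large point sets, that distributed frameworks must parallelize.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Node:
    point: float
    lo: float                       # smallest point in this subtree
    hi: float                       # largest point in this subtree
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(points: List[float]) -> Optional[Node]:
    """Build a balanced 1-D search tree over an already-sorted point list."""
    if not points:
        return None
    mid = len(points) // 2
    return Node(points[mid], points[0], points[-1],
                build(points[:mid]), build(points[mid + 1:]))

def count_within(node: Optional[Node], q: float, r: float) -> int:
    """Count points within distance r of q, pruning out-of-range subtrees."""
    if node is None or q + r < node.lo or q - r > node.hi:
        return 0                    # truncate: this subtree cannot contribute
    hit = 1 if abs(node.point - q) <= r else 0
    return hit + count_within(node.left, q, r) + count_within(node.right, q, r)

points = sorted(float(x * x % 97) for x in range(200))
tree = build(points)
# One traversal per query point: the repeated-traversal structure SPIRIT distributes.
correlation = sum(count_within(tree, q, 5.0) for q in points)
print(correlation)
```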
16

Improving the Performance and Energy Efficiency of Emerging Memory Systems

Guo, Yuhua 01 January 2018 (has links)
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits scalability and degrades performance. Second, although memory capacity has increased dramatically in the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory-wall problem. Third, DRAM dissipates considerable power, having been reported to account for as much as 40% of total system energy, and this problem worsens as DRAM scales up. To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose a high-performance, bandwidth-efficient approach, called SELF, to breaking the memory bandwidth wall by exploiting die-stacked DRAM as part of memory; and 3) we propose a cost-effective and energy-efficient architecture, called Dual Role HBM (DR-HBM), for hybrid memory systems composed of high-bandwidth memory (HBM) and phase-change memory (PCM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to HBM to improve performance, while cold pages are stored in PCM to save energy.
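As a rough sketch of the hot/cold page-placement idea behind a hybrid HBM/PCM design (the epoch length, hotness threshold, and class names below are assumptions made for illustration, not the DR-HBM mechanism itself): per-page access counts are kept for an epoch, and pages that cross a hotness threshold are placed in the faster HBM tier while the rest remain in PCM.

```python
from collections import Counter

HOT_THRESHOLD = 64        # accesses per epoch before a page is considered hot (assumed)
HBM_CAPACITY_PAGES = 4    # tiny capacity so the example shows the capacity limit

class TieredMemory:
    """Toy two-tier placement: hot pages live in HBM, everything else in PCM."""

    def __init__(self):
        self.counts = Counter()     # per-page access counts for the current epoch
        self.in_hbm = set()

    def access(self, page: int) -> str:
        self.counts[page] += 1
        return "HBM" if page in self.in_hbm else "PCM"

    def end_epoch(self):
        # Keep the most-accessed pages that crossed the threshold in HBM.
        hot = [p for p, c in self.counts.most_common() if c >= HOT_THRESHOLD]
        self.in_hbm = set(hot[:HBM_CAPACITY_PAGES])
        self.counts.clear()

mem = TieredMemory()
trace = [0, 1, 2] * 100 + list(range(3, 50))   # pages 0-2 are hot, the rest cold
for page in trace:
    mem.access(page)
mem.end_epoch()
print(sorted(mem.in_hbm))   # -> [0, 1, 2]: the hot pages migrate to HBM
```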
17

Remembering without storing: beyond archival models in the science and philosophy of human memory

O'Loughlin, Ian 01 July 2014 (has links)
Models of memory in cognitive science and philosophy have traditionally explained human remembering in terms of storage and retrieval. This tendency has been entrenched by reliance on computationalist explanations over the course of the twentieth century; even research programs that eschew computationalism in name, or attempt to revise traditional models, demonstrate a tacit commitment to computationalist assumptions. It is assumed that memory must be stored by means of an isomorphic trace, that memory processes must divide into conceptually distinct systems and phases, and that human remembering consists in inner, cognitive processes implemented by distinct neural processes. This dissertation draws on recent empirical work, and on philosophical arguments from Ludwig Wittgenstein and others, to show that this latent computationalism in the study of memory is problematic and that it can and should be eliminated. Cognitive psychologists studying memory have gathered numerous findings in recent decades that belie archival models, and in cognitive neuroscience, establishing the neural basis of storage and retrieval processes has proven elusive. A number of revised models on offer in memory science that take these issues into account still fail to extricate themselves from the archival framework, and several impasses in memory science are products of these underlying computationalist assumptions. Wittgenstein and other philosophers offer a number of arguments against the need for, and the efficacy of, the storage and retrieval of traces in human remembering. A study of these arguments clarifies the ways in which computationalist assumptions are presently impeding the science of memory and provides ways forward in removing them. We can and should characterize and model human memory without invoking the storage and retrieval of traces. A range of work in connectionism, dynamical systems theory, and recent philosophical accounts of memory demonstrates how the science of memory can proceed without these assumptions, toward non-archival models of remembering.
18

Mitigating DRAM complexities through coordinated scheduling policies

Stuecheli, Jeffrey Adam 04 June 2012 (has links)
Contemporary DRAM systems have maintained impressive scaling by managing a careful balance between performance, power, and storage density. In achieving these goals, the significant sacrifice has been DRAM's operational complexity. To realize good performance, systems must properly manage the large number of structural and timing restrictions of the DRAM devices. DRAM's efficient use is further complicated in many-core systems, where the memory interface must be shared among multiple cores and threads competing for memory bandwidth. In computer architecture, caches have primarily been viewed as a means to hide memory latency from the CPU. Cache policies have focused on anticipating the CPU's data needs and are mostly oblivious to the main memory. This work demonstrates that the era of many-core architectures has created new main-memory bottlenecks and mandates a new approach: coordination of cache policy with main-memory characteristics. Using the cache for memory-optimization purposes dramatically expands the memory controller's visibility of processor behavior at low implementation overhead. Through memory-centric modification of existing policies, such as scheduled writebacks, this work demonstrates that the performance-limiting effects of highly threaded architectures combined with complex DRAM operation can be overcome. It shows that, with awareness of the physical main-memory layout and a focus on writes, both average read and write latency can be shortened, memory power reduced, and overall system performance improved. The "Page-Mode" feature of DRAM devices can mitigate many DRAM constraints. Current open-page policies attempt to garner the highest possible number of page hits; to achieve this, such greedy schemes map sequential address sequences to a single DRAM resource. This non-uniform resource-usage pattern introduces high levels of conflict when multiple workloads in a many-core system map to the same set of resources. This work presents a scheme that strikes a careful balance between the benefits of page-mode accesses (increased performance and decreased power) and their drawback (unfairness). In the proposed Minimalist approach, the system targets "just enough" page-mode accesses to garner page-mode benefits while avoiding system unfairness. This is accomplished with a fair memory hashing scheme that controls the maximum number of page-mode hits. High-density memory is becoming ever more important as many execution streams are consolidated onto single-chip many-core processors. DRAM is ubiquitous as a main-memory technology, but while DRAM's per-chip density and frequency continue to scale, the time required to refresh its dynamic cells has grown at an alarming rate. This work shows how currently employed methods of scheduling refresh operations are ineffective in mitigating the significant performance degradation caused by longer refresh times. Current approaches are deficient: they do not effectively exploit the flexibility of DRAMs to postpone refresh operations. This work proposes dynamically reconfigurable predictive mechanisms that exploit the full dynamic range allowed by the industry-standard DRAM memory specifications. The proposed mechanisms are shown to mitigate much of the penalty seen with dense DRAM devices. In summary, this work presents a significant improvement in the ability to exploit the capabilities of high-density, high-frequency DRAM devices in a many-core environment. This is accomplished through the coordination of previously disparate system components and their integration into highly integrated system designs.
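To illustrate the "just enough" page-mode idea in a simplified form (the run length, bank count, and XOR hash below are assumptions for the sketch, not the exact Minimalist scheme): instead of mapping long sequential runs to one bank, the address mapping lets only a short run of consecutive cache blocks share a DRAM row before rotating to another bank, trading a few row-buffer hits for much lower conflict between co-running workloads.

```python
BLOCK_BYTES = 64      # cache-line size (assumed)
RUN_BLOCKS = 4        # consecutive blocks allowed to share one DRAM row (assumed)
NUM_BANKS = 8

def map_address(addr):
    """Map a physical address to (bank, row) so page-mode runs stay short."""
    block = addr // BLOCK_BYTES
    run = block // RUN_BLOCKS          # index of this 4-block run
    row = run // NUM_BANKS
    bank = (run ^ row) % NUM_BANKS     # simple XOR hash spreads runs across banks
    return bank, row

# A sequential stream produces a short burst of hits to one row, then moves on
# to another bank, so two co-running streams rarely pile onto the same bank.
for addr in range(0, BLOCK_BYTES * 24, BLOCK_BYTES):
    print(hex(addr), map_address(addr))
```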
19

Architecting heterogeneous memory systems with 3D die-stacked memory

Sim, Jae Woong 21 September 2015 (has links)
The main objective of this research is to efficiently enable 3D die-stacked memory and heterogeneous memory systems. 3D die-stacking is an emerging technology that allows large amounts of in-package, high-bandwidth memory storage. Die-stacked memory has the potential to provide extraordinary performance and energy benefits for computing environments, from data-intensive to mobile computing. However, incorporating die-stacked memory into computing environments requires innovations across the system stack, from hardware to software. This dissertation presents several architectural innovations for practically deploying die-stacked memory in a variety of computing systems. First, this dissertation proposes using die-stacked DRAM as a hardware-managed cache in a practical and efficient way. The proposed DRAM cache architecture employs two novel techniques: hit-miss speculation and self-balancing dispatch. These techniques virtually eliminate the hardware overhead of maintaining a multi-megabyte SRAM structure when scaling to gigabytes of stacked DRAM cache, and they improve overall memory bandwidth utilization. Second, this dissertation proposes a DRAM cache organization that provides a high level of reliability for die-stacked DRAM caches in a cost-effective manner. The proposed DRAM cache uses error-correcting codes (ECC), strong checksums (CRCs), and dirty-data duplication to detect and correct a wide range of stacked-DRAM failures, from traditional bit errors to large-scale row, column, bank, and channel failures, within the constraints of commodity, non-ECC DRAM stacks. With only a modest performance degradation compared to a DRAM cache with no ECC support, the proposed organization can correct all single-bit failures and 99.9993% of all row, column, and bank failures. Third, this dissertation proposes architectural mechanisms to use large, fast, on-chip memory structures as part of memory (PoM) seamlessly through the hardware. The proposed design achieves the performance benefit of on-chip memory caches without sacrificing a large fraction of total memory capacity to serve as a cache; to achieve this, PoM dynamically remaps regions of memory based on their access patterns and expected performance benefits. Lastly, this dissertation explores a new usage model for die-stacked DRAM involving a hybrid of caching and virtual memory support. In the common case, where the system's physical memory is not over-committed, die-stacked DRAM operates as a cache to provide performance and energy benefits to the system. When the workload's active memory demands exceed the capacity of physical memory, however, the proposed scheme dynamically converts the stacked DRAM cache into a fast swap device to avoid the otherwise grievous performance penalty of swapping to disk.
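As a sketch of what hit-miss speculation can look like (the region-based saturating-counter predictor below is an illustrative stand-in, not the dissertation's design): rather than waiting on a tag probe of the stacked DRAM, or keeping a multi-megabyte SRAM tag array, the controller predicts hit or miss up front, starts the likely path immediately, and lets the real tag check confirm the prediction and train the predictor.

```python
class StackedDramCache:
    """Toy stacked-DRAM cache: holds a set of resident block addresses."""
    def __init__(self, resident):
        self.resident = set(resident)
    def lookup(self, addr):
        hit = addr in self.resident
        return (f"hbm:{addr}" if hit else None), hit

class HitMissPredictor:
    """Per-4KB-region 2-bit saturating counters that predict DRAM-cache hits."""
    REGION_BYTES = 4096
    def __init__(self):
        self.counters = {}                      # region -> 0..3; >= 2 predicts hit
    def predict_hit(self, addr):
        return self.counters.get(addr // self.REGION_BYTES, 1) >= 2
    def update(self, addr, was_hit):
        r = addr // self.REGION_BYTES
        c = self.counters.get(r, 1)
        self.counters[r] = min(3, c + 1) if was_hit else max(0, c - 1)

def handle_request(addr, cache, pred):
    if pred.predict_hit(addr):
        data, hit = cache.lookup(addr)          # speculate hit: probe stacked DRAM first
        if not hit:
            data = f"offchip:{addr}"            # mispredicted: fetch from off-chip memory
    else:
        data = f"offchip:{addr}"                # speculate miss: go off-chip right away
        _, hit = cache.lookup(addr)             # the real tag check still confirms/learns
    pred.update(addr, hit)
    return data

cache = StackedDramCache(resident=range(0, 65536, 64))
pred = HitMissPredictor()
for addr in [0, 64, 128, 1 << 20, 1 << 20, 192]:
    print(addr, handle_request(addr, cache, pred))
```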
20

Understanding Multicore Performance: Efficient Memory System Modeling and Simulation

Sandberg, Andreas January 2014 (has links)
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10,000×–100,000× slower than native execution, which leads to three problems. First, the high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimization, where rapid turnaround is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications be simulated multiple times with different interleavings to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually addressed by simulating only a relatively small number of instructions near the start of an application, at the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies for accurately modeling multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles, which can be measured on existing hardware, can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleavings without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator. By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution. / CoDeR-MP / UPMARC
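A rough sketch of the sampling strategy (the function names, interval, and window sizes are placeholders, not the simulator's actual interface): execution is fast-forwarded cheaply between sample points, a short window is simulated in detail at each point, and because each sample depends only on the fast-forwarded state, the detailed windows can be simulated in parallel.

```python
import random

SAMPLE_INTERVAL = 50_000_000   # instructions fast-forwarded between samples (assumed)
DETAIL_WINDOW = 100_000        # instructions simulated in detail per sample (assumed)

def fast_forward(pc, n):
    """Stand-in for native execution: advance program state cheaply, no timing model."""
    return pc + n

def detailed_simulation(pc, n, rng):
    """Stand-in for the slow microarchitectural model: returns CPI for this window."""
    return 0.8 + 0.4 * rng.random()    # fake CPI; a real model would derive this

def sample_run(total_instructions, seed=0):
    rng = random.Random(seed)
    pc, samples = 0, []
    while pc < total_instructions:
        pc = fast_forward(pc, SAMPLE_INTERVAL)            # cheap, near-native speed
        samples.append(detailed_simulation(pc, DETAIL_WINDOW, rng))
        # Each sample depends only on the fast-forwarded state, so samples could be
        # farmed out to parallel workers instead of being simulated back-to-back.
    return sum(samples) / len(samples)

print("estimated CPI:", round(sample_run(2_000_000_000), 3))
```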
