91 |
Software Transactional Memory Building Blocks
Riegel, Torvald 13 May 2013 (has links)
Exploiting thread-level parallelism has become a part of mainstream programming in recent years. Many approaches to parallelization require threads executing in parallel to also synchronize occasionally (i.e., coordinate concurrent accesses to shared state). Transactional Memory (TM) is a programming abstraction that provides the concept of database transactions in the context of programming languages such as C/C++. This allows programmers to declare only which pieces of a program synchronize, without requiring them to actually implement synchronization and tune its performance, which in turn typically makes TM easier to use than other abstractions such as locks.
I have investigated and implemented the building blocks that are required for a high-performance, practical, and realistic TM. They host several novel algorithms and optimizations for TM implementations, both for current hardware and future hardware extensions for TM, and are being used in or have influenced commercial TM implementations such as the TM support in GCC.
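The "declare, don't implement" idea can be made concrete with a small sketch. The Python toy below (not GCC's actual API, which exposes `__transaction_atomic` blocks in C/C++ under `-fgnu-tm`) backs every atomic section with one global lock; a real STM would instead track read/write sets and retry on conflict, but the programmer-visible contract is the same: mark the section, let the runtime synchronize.

```python
import threading

# A toy "atomic" context manager: code declares *which* sections
# synchronize, while the runtime decides *how* (here: one global lock;
# a real STM would use versioned read/write sets and retry on conflict).
_tm_lock = threading.RLock()

class atomic:
    def __enter__(self):
        _tm_lock.acquire()
        return self
    def __exit__(self, exc_type, exc, tb):
        _tm_lock.release()
        return False

account = {"a": 100, "b": 0}

def transfer(amount):
    # The programmer only marks the section atomic; no lock ordering,
    # no contention tuning -- that is the TM runtime's job.
    with atomic():
        account["a"] -= amount
        account["b"] += amount

threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(account["a"], account["b"])  # -> 0 100
```

The point of the sketch is the interface, not the implementation: swapping the global lock for an optimistic STM would not change `transfer` at all.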
|
92 |
High Performance Network I/O in Virtual Machines over Modern Interconnects
Huang, Wei 12 September 2008 (has links)
No description available.
|
93 |
Secure Communication in a Multi-OS-Environment
Bathe, Shivraj Gajanan 02 February 2016 (has links) (PDF)
The current trend in the automotive industry is toward adopting multicore microcontrollers in Electronic Control Units (ECUs). Multicore microcontrollers make it possible to run a number of separated and dedicated operating systems on a single ECU. When two heterogeneous operating systems run in parallel in a multicore environment, the inter-OS communication between them becomes a key factor in the overall performance. This thesis studies inter-OS communication based on shared memory, in a setup with two operating systems: EB Autocore OS, which is based on the AUTomotive Open System Architecture (AUTOSAR) standard, and Android. Android, being the gateway to the internet, introduces many attack surfaces to the system due to its open nature and the increased connectivity features of a connected car. As safety and security go hand in hand, the security aspects of the communication channel are taken into account. A portable prototype for multi-OS communication based on shared memory, with security considerations, is developed as a plugin for EB tresos Studio.
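The EB tresos plugin itself is not reproduced here, but the core idea of an authenticated shared-memory channel can be sketched with Python's standard library. All names and the framing layout below are hypothetical illustration; a real ECU would use provisioned keys and whatever MAC scheme its security stack mandates.

```python
import hmac, hashlib, struct
from multiprocessing import shared_memory

KEY = b"pre-shared-key"  # hypothetical; real ECUs use provisioned, per-channel keys

def write_msg(shm, payload: bytes):
    # Frame layout: 4-byte little-endian length | payload | 32-byte HMAC-SHA256 tag.
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    frame = struct.pack("<I", len(payload)) + payload + tag
    shm.buf[:len(frame)] = frame

def read_msg(shm) -> bytes:
    (n,) = struct.unpack_from("<I", shm.buf, 0)
    payload = bytes(shm.buf[4:4 + n])
    tag = bytes(shm.buf[4 + n:4 + n + 32])
    # Constant-time comparison guards against timing attacks on the tag.
    if not hmac.compare_digest(tag, hmac.new(KEY, payload, hashlib.sha256).digest()):
        raise ValueError("message authentication failed")
    return payload

shm = shared_memory.SharedMemory(create=True, size=4096)
try:
    write_msg(shm, b"wheel_speed=87")
    peer = shared_memory.SharedMemory(name=shm.name)  # the "other OS" attaches by name
    received = read_msg(peer)
    peer.close()
finally:
    shm.close()
    shm.unlink()
print(received)  # b'wheel_speed=87'
```

A tampered payload or tag makes `read_msg` raise instead of delivering the message, which is the property the thesis's security considerations are after; confidentiality would additionally require encrypting the payload.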
|
94 |
An in-situ visualization approach for parallel coupling and steering of simulations through distributed shared memory files / Une approche de visualisation in-situ pour le couplage parallèle et le pilotage de simulations à travers des fichiers en mémoire distribuée partagée
Soumagne, Jérôme 14 December 2012 (has links)
Les codes de simulation devenant plus performants et plus interactifs, il est important de suivre l'avancement d'une simulation in-situ, en réalisant non seulement la visualisation mais aussi l'analyse des données en même temps qu'elles sont générées. Suivre l'avancement ou réaliser le post-traitement des données de simulation in-situ présente un avantage évident par rapport à l'approche conventionnelle consistant à sauvegarder—et à recharger—à partir d'un système de fichiers; le temps et l'espace pris pour écrire et ensuite lire les données à partir du disque est un goulet d'étranglement significatif pour la simulation et les étapes consécutives de post-traitement. Par ailleurs, la simulation peut être arrêtée, modifiée, ou potentiellement pilotée, conservant ainsi les ressources CPU. Nous présentons dans cette thèse une approche de couplage faible qui permet à une simulation de transférer des données vers un serveur de visualisation via l'utilisation de fichiers en mémoire. Nous montrons dans cette étude comment l'interface, implémentée au-dessus d'un format hiérarchique de données (HDF5), nous permet de réduire efficacement le goulet d'étranglement introduit par les I/Os en utilisant des stratégies efficaces de communication et de configuration des données. Pour le pilotage, nous présentons une interface qui permet non seulement la modification de simples paramètres, mais également le remaillage complet de grilles ou des opérations impliquant la régénération de grandeurs numériques sur le domaine entier de calcul d'être effectués. Cette approche, testée et validée sur deux cas-tests industriels, est suffisamment générique pour qu'aucune connaissance particulière du modèle de données sous-jacent ne soit requise. / As simulation codes become more powerful and more interactive, it is increasingly desirable to monitor a simulation in-situ, performing not only visualization but also analysis of the incoming data as it is generated.
Monitoring or post-processing simulation data in-situ has an obvious advantage over the conventional approach of saving to—and reloading data from—the file system; the time and space it takes to write and then read the data from disk is a significant bottleneck for both the simulation and subsequent post-processing steps. Furthermore, the simulation may be stopped, modified, or potentially steered, thus conserving CPU resources. We present in this thesis a loosely coupled approach that enables a simulation to transfer data to a visualization server via the use of in-memory files. We show in this study how the interface, implemented on top of a widely used hierarchical data format (HDF5), allows us to efficiently reduce the I/O bottleneck by using efficient communication and data mapping strategies. For steering, we present an interface that allows not only simple parameter changes but also complete re-meshing of grids, or operations involving regeneration of field values over the entire computational domain, to be carried out. This approach, tested and validated on two industrial test cases, is generic enough that no particular knowledge of the underlying model is required.
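The mechanics of an in-memory file shared between a simulation writer and a visualization reader can be sketched with the standard library alone. The sketch below is a stand-in, not the thesis's HDF5 interface: `io.BytesIO` plays the role of the in-memory (HDF5 "core"-driver style) file, and the record framing is an assumption made up for illustration.

```python
import io, struct

# Stand-in for an in-memory file shared between a simulation (writer)
# and a visualization pass (reader) -- no disk I/O is ever involved.
memfile = io.BytesIO()

def simulate_step(step: int, values):
    # The simulation appends one record per timestep:
    # 4-byte step id | 4-byte count | `count` little-endian doubles.
    memfile.write(struct.pack("<II", step, len(values)))
    memfile.write(struct.pack(f"<{len(values)}d", *values))

def visualize_all():
    # The "visualization server" rereads the in-memory file and,
    # as a toy analysis, reduces each timestep to its mean.
    memfile.seek(0)
    means = {}
    while True:
        header = memfile.read(8)
        if len(header) < 8:
            break
        step, n = struct.unpack("<II", header)
        values = struct.unpack(f"<{n}d", memfile.read(8 * n))
        means[step] = sum(values) / n
    return means

for s in range(3):
    simulate_step(s, [float(s), float(s) + 2.0])

print(visualize_all())  # {0: 1.0, 1: 2.0, 2: 3.0}
```

The real system adds what the toy omits: a self-describing hierarchical layout (HDF5), parallel writers, and a steering path back from the reader to the simulation.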
|
95 |
La littérature des Coréens du Japon : la construction d’une nouvelle identité littéraire, sa réalisation et sa remise en cause / Literature by Koreans of Japan : construction of new literary identity, its realization and reconsideration
Yoshida, Aki 16 November 2018 (has links)
La littérature des Coréens du Japon ou la littérature des Coréens zainichi (zainichi signifie littéralement « étant au Japon ») obtient une large reconnaissance sur la scène littéraire japonaise à partir de la fin des années 1960, mais l’apparition des premiers écrivains zainichi remonte au lendemain de la Deuxième Guerre mondiale, dans une période marquée par la décolonisation de la Corée et plus tard par la guerre de Corée. Le présent travail consiste à mettre en perspective le processus de construction d’un discours littéraire à caractère identitaire et ce jusqu’à sa mise en question, ainsi qu’à examiner les stratégies esthétiques que mettent en œuvre les écrivains afin de se démarquer de la littérature japonaise. Cette recherche porte principalement sur le travail de trois écrivains majeurs de cette littérature : Kim Tal-su 金達寿(1919-1997), Kim Sŏk-pŏm 金石範(1925-) et Yi Yang-ji李良枝(1955-1992) qui représentent respectivement la période de l’émergence de l’écriture zainichi de 1946 jusqu’au début des années 1950, celle de la reconnaissance d’un statut littéraire spécifique au début des années 1970 et celle du renouvellement tant thématique que narratif des années 1980. Si ces auteurs se distinguent dans leur thématique et dans leur style, ils ont en commun d’inventer une nouvelle écriture dans une situation d’exigence – autrement dit diasporique – où représenter chaque vie et chaque voix singulière peut aussitôt prendre une dimension mémorielle aussi bien que politique. Ainsi, l’évolution de la littérature zainichi est-elle aussi celle de la voix narrative qui se forme et se renouvelle dans cette tension permanente entre le subjectif et le collectif. / Zainichi Korean literature (zainichi literally meaning “to be in Japan”) has met widespread recognition on the Japanese literary scene since the late 1960s. But in fact Korean zainichi writers emerged earlier: in the aftermath of WW2, during the decolonization of Korea and the subsequent Korean War. 
This dissertation focuses on the construction process of a new literary discourse, intricately linked to the question of identity, but also on the criticism it underwent. Furthermore, this work analyzes the aesthetic strategies used by each author to distance their works from Japanese literature. It centers on the following three authors: Kim Tal-su (1919-1997), Kim Sŏk-pŏm (1925-) and Yi Yang-ji (1955-1992), who respectively represent a period in the development of zainichi literature: the emergence of zainichi writers between 1946 and the early 1950s, the establishment of a new literary category in the early 1970s, and the thematic and narrative renewal of the 1980s. These authors worked on different themes and wrote in distinct styles, and yet their writings were all born within a complex relationship to their community, as a minority and diaspora. As such, they narrate individual histories, which also carry a memorial and political dimension. Thus, the history of zainichi literature is also a history of individual voices, which emerge from and permanently renew the tension between the subjective and the collective.
|
96 |
SHARED-GM: Arquitetura de Memória Distribuída para o Ambiente D-GM. / SHARED-GM: DISTRIBUTED MEMORY ARCHITECTURE FOR THE D-GM ENVIRONMENT
Zechlinski, Gustavo Mata 11 September 2010 (has links)
The recent advances in computer technology have increased the use of computer clusters for running applications that require a large computational effort, making this practice a strong current tendency. Following this tendency, the D-GM (Distributed Geometric Machine) environment is a tool composed of two software modules, VPE-GM (Visual Programming Environment for Geometric Machine) and VirD-GM (Virtual Distributed Geometric Machine), whose goals are the development of scientific-computing applications applying visual programming and parallel and/or distributed execution, respectively. The core of the D-GM environment is based on the Geometric Machine (GM) model, an abstract machine model for parallel and/or concurrent computations whose definitions cover the forms of parallelism available for process execution.

The main contribution of this work is the formalization and development of a distributed memory for the D-GM environment, by designing, modeling and constructing the integration between this environment and a distributed shared memory (DSM) system. The aim is a better execution dynamic with greater functionality and, possibly, increased performance of D-GM applications. This integration, whose objective is to supply a distributed shared memory module to the D-GM environment, is called ShareD-GM. Based on a study of software DSM implementations, mainly of the characteristics that meet the requirements for implementing the D-GM distributed memory, this work adopts the Terracotta system. Two facilities present in Terracotta stand out: portability, and adaptability for distributed execution on a cluster of computers with no code modifications (codeless clustering). Besides these characteristics, Terracotta does not use RMI (Remote Method Invocation) for communication among objects in a Java environment, which also minimizes the overhead of data serialization (marshalling) in network transmissions. In addition, applications developed to evaluate the architecture provided by the ShareD-GM integration, such as the Smith-Waterman algorithm and the Jacobi method, showed a shorter running time than the previous VirD-GM execution module. / O recente avanço das tecnologias de computadores impulsionou o uso de clusters de computadores para a execução de aplicações que exijam um grande esforço computacional, tornando esta prática uma forte tendência atual. Acompanhando esta tendência, o Ambiente D-GM (Distributed Geometric Machine) constitui-se em uma ferramenta compreendendo dois módulos de software, VPE-GM (Visual Programming Environment for Geometric Machine) e VirD-GM (Virtual Distributed Geometric Machine), os quais objetivam o desenvolvimento de aplicações da computação científica aplicando a programação visual e a execução paralela e/ou distribuída, respectivamente. O núcleo do Ambiente D-GM está fundamentado na Máquina Geométrica (Geometric Machine, GM), um modelo de máquina abstrato para computações paralelas e/ou concorrentes cujas definições abrangem os paralelismos existentes para a execução de processos. A principal contribuição deste trabalho é a formalização e o desenvolvimento de uma memória distribuída para o Ambiente D-GM através da concepção, modelagem e construção da integração entre o Ambiente D-GM e um sistema DSM (Distributed Shared Memory), visando uma melhor dinâmica de execução, com maior funcionalidade e, possivelmente, com melhor desempenho no Ambiente D-GM. A esta integração, cujo objetivo é fornecer um modelo de memória compartilhada distribuída para o Ambiente D-GM, dá-se o nome de ShareD-GM. Com base no estudo de implementações de DSM em software e nas características que atendem aos requisitos de implementação da memória distribuída do Ambiente D-GM, este trabalho considera o uso do sistema Terracotta. Salientam-se duas facilidades apresentadas pelo Terracotta: a portabilidade e a adaptabilidade para execução distribuída em clusters de computadores com pouca ou até nenhuma modificação no código (codeless clustering), as quais trazem grandes benefícios na integração com aplicações Java. Além disso, o Terracotta não utiliza RMI (Remote Method Invocation) para a comunicação entre os objetos em um ambiente Java; nesta perspectiva, procura-se minimizar o overhead produzido pelas serializações (marshalling) dos dados nas transmissões via rede. Pôde-se também comprovar, durante os testes de avaliação da implementação da arquitetura proporcionada pela integração ShareD-GM, que aplicações modeladas no Ambiente D-GM, como o algoritmo de Smith-Waterman e o método de Jacobi, apresentaram menor tempo de execução quando comparadas com a implementação anterior, no módulo de execução VirD-GM do Ambiente D-GM.
|
97 |
Um ambiente de execução para suporte à programação paralela com variáveis compartilhadas em sistemas distribuídos heterogêneos. / A runtime system for parallel programming with the shared-memory paradigm over heterogeneous distributed systems.
Craveiro, Gisele da Silva 31 October 2003 (links)
O avanço na tecnologia de hardware está permitindo que máquinas SMP de 2 a 8 processadores estejam disponíveis a um custo cada vez menor, possibilitando que a incorporação de tais máquinas em aglomerados de PC's ou até mesmo a composição de um aglomerado de SMP's sejam alternativas cada vez mais viáveis para computação de alto desempenho. O grande desafio é extrair o potencial que tal conjunto de máquinas oferece. Uma alternativa é usar um paradigma híbrido de programação para aproveitar a arquitetura de memória compartilhada através de multithreading e utilizar o modelo de troca de mensagens para comunicação entre os nós. Contudo, essa estratégia impõe uma tarefa árdua e pouco produtiva para o programador da aplicação. Este trabalho apresenta o sistema CPAR-Cluster, que oferece uma abstração de memória compartilhada no topo de um aglomerado formado por nós mono e multiprocessadores. O sistema é implementado no nível de biblioteca e não faz uso de recursos especiais tais como hardware especializado ou alteração na camada de sistema operacional. Serão apresentados os modelos, estratégias, questões de implementação e os resultados obtidos através de testes realizados com a ferramenta, que apresentaram o comportamento esperado. / The advance in hardware technologies is making small-configuration SMP machines (from 2 to 8 processors) available at a low cost. For this reason, the inclusion of SMP nodes in a cluster of PCs, or even clusters of SMPs, are becoming viable alternatives for high-performance computing. The challenge is the exploitation of the computational resources that these platforms provide. A hybrid programming paradigm, which uses the shared-memory architecture through multithreading and the message-passing model for inter-node communication, is one alternative. However, programming in such a paradigm is very hard.
This thesis presents CPAR-Cluster, a runtime system that provides a shared-memory abstraction on top of a cluster composed of mono- and multiprocessor nodes. Its implementation is at the library level and doesn't require special resources such as particular hardware or operating-system modifications. Models, strategies, implementation aspects and results will be presented.
|
98 |
Eidolon: adapting distributed applications to their environment.
Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008 (has links)
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed computing applications can be run. Due to the diversity of features provided by these environments, a distributed application that is to perform well must be specifically designed and optimised for the environment in which it is deployed. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently and with improved functionality over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications. This is used to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the necessary mechanisms and interconnects to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view model properties on several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects and potential shortcomings of the view model by comparing our model with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
|
99 |
Software Techniques for Distributed Shared Memory
Radovic, Zoran January 2005 (links)
In large multiprocessors, the access to shared memory is often nonuniform, and may vary as much as ten times for some distributed shared-memory architectures (DSMs). This dissertation identifies another important nonuniform property of DSM systems: nonuniform communication architecture, NUCA. High-end hardware-coherent machines built from large nodes, or from chip multiprocessors, are typical NUCA systems, since they have a lower penalty for reading recently written data from a neighbor's cache than from a remote cache. This dissertation identifies node affinity as an important property for scalable general-purpose locks. Several software-based hierarchical lock implementations exploiting NUCAs are presented and evaluated. NUCA-aware locks are shown to be almost twice as efficient for contended critical sections compared to traditional lock implementations.

The shared-memory "illusion" provided by some large DSM systems may be implemented using either hardware, software or a combination thereof. A software-based implementation can enable cheap cluster hardware to be used, but typically suffers from poor and unpredictable performance characteristics.

This dissertation advocates a new software-hardware trade-off design point based on a new combination of techniques. The two low-level techniques, fine-grain deterministic coherence and synchronous protocol execution, as well as profile-guided protocol flexibility, are evaluated in isolation as well as in a combined setting using all-software implementations. Finally, a minimum of hardware trap support is suggested to further improve the performance of coherence protocols across cluster nodes. It is shown that all these techniques combined could result in a fairly stable performance on par with hardware-based coherence.
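The node-affinity idea behind NUCA-aware locks can be sketched as a two-level lock. The Python toy below is a simplification: threads first serialize within their node, then one winner per node competes globally. Real hierarchical (cohort-style) locks go further and hand the global lock directly to a same-node waiter instead of fully releasing it, which is what keeps recently written data in the neighbor's cache.

```python
import threading

class HierarchicalLock:
    """Two-level lock sketch: contend locally first, then globally,
    so global-lock traffic is limited to one thread per node."""
    def __init__(self, num_nodes):
        self.local = [threading.Lock() for _ in range(num_nodes)]
        self.glob = threading.Lock()

    def acquire(self, node_id):
        self.local[node_id].acquire()   # contend only with same-node threads
        self.glob.acquire()             # one winner per node contends globally

    def release(self, node_id):
        self.glob.release()
        self.local[node_id].release()

counter = 0
lock = HierarchicalLock(num_nodes=2)

def work(node_id):
    global counter
    for _ in range(1000):
        lock.acquire(node_id)
        counter += 1
        lock.release(node_id)

threads = [threading.Thread(target=work, args=(i % 2,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```

Mutual exclusion is preserved by the global lock alone; the local level only shapes *who* reaches it, which is exactly the property the dissertation exploits for contended critical sections.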
|
100 |
The System-on-a-Chip Lock Cache
Akgul, Bilge Ebru Saglam 12 April 2004 (has links)
In this dissertation, we implement efficient lock-based synchronization through a novel, high-performance, simple and scalable hardware technique and associated software for a target shared-memory multiprocessor System-on-a-Chip (SoC). The custom hardware part of our solution is provided in the form of an intellectual property (IP) hardware unit which we call the SoC Lock Cache (SoCLC). SoCLC provides effective lock hand-off by reducing on-chip memory traffic and improving performance in terms of lock latency, lock delay and bandwidth consumption. The proposed solution is independent of the memory hierarchy, cache protocol and processor architectures used in the SoC, which enables easily applicable implementations of the SoCLC (e.g., as reconfigurable or partially/fully custom logic), and which distinguishes SoCLC from previous approaches. Furthermore, the SoCLC mechanism has been extended to support priority inheritance with an immediate priority ceiling protocol (IPCP) implemented in hardware, which enhances the hard real-time performance of the system.

Our experimental results in a four-processor SoC indicate that SoCLC can achieve up to 37% overall speedup over spin-lock and up to 48% overall speedup over MCS for a microbenchmark with false sharing. The priority inheritance implemented as part of the SoCLC hardware, on the other hand, achieves a 1.43x speedup in overall execution time of a robot application when compared to the priority inheritance implementation under the Atalanta real-time operating system. Furthermore, it has been shown that with the IPCP mechanism integrated into the SoCLC, all of the tasks of the robot application could meet their deadlines (e.g., a high-priority task with a 250 us worst-case response time could complete its execution in 93 us with SoCLC, whereas the same task missed its deadline by completing its execution in 283 us without SoCLC). Therefore, with IPCP support, our solution can provide better real-time guarantees for real-time systems.

To automate SoCLC design, we have also developed an SoCLC-generator tool, PARLAK, that generates user-specified configurations of a custom SoCLC. We used PARLAK to generate SoCLCs from a version for two processors with 32 lock variables, occupying 2,520 gates, up to a version for fourteen processors with 256 lock variables, occupying 78,240 gates.
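The immediate priority ceiling protocol (IPCP) behavior described above is implemented in SoCLC hardware; its logic can nonetheless be illustrated in plain software. The sketch below is a single-resource toy with hypothetical names: on acquire, the task's priority is immediately raised to the lock's ceiling (the highest priority of any task that may use the lock), which bounds blocking and prevents priority inversion.

```python
import threading

class IPCPLock:
    """Immediate priority ceiling protocol, single-resource sketch.
    Unlike basic priority inheritance -- which boosts the holder only
    when a higher-priority task actually blocks -- IPCP boosts the
    holder to the ceiling the moment it acquires the resource."""
    def __init__(self, ceiling):
        self.ceiling = ceiling
        self._lock = threading.Lock()

    def acquire(self, task):
        self._lock.acquire()
        task["saved_priority"] = task["priority"]
        # Immediate boost to the ceiling, independent of any waiters.
        task["priority"] = max(task["priority"], self.ceiling)

    def release(self, task):
        task["priority"] = task["saved_priority"]
        self._lock.release()

lock = IPCPLock(ceiling=10)
task = {"name": "robot_arm", "priority": 3}
lock.acquire(task)
boosted = task["priority"]      # 10 while the resource is held
lock.release(task)
restored = task["priority"]     # back to 3
print(boosted, restored)  # 10 3
```

The hardware version does the same bookkeeping in the lock cache itself, so the boost costs no extra memory traffic on the SoC bus.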
|