1 |
Verification of Software under Relaxed Memory. Leonardsson, Carl, January 2016
The work covered in this thesis concerns automatic analysis of correctness of parallel programs running under relaxed memory models. When a parallel program is compiled and executed on a modern architecture, various optimizations may cause it to behave in unexpected ways. In particular, accesses to the shared memory may appear in the execution in the opposite order to how they appear in the control flow of the original program source code. The memory model determines which memory accesses can be reordered in a program on a given system. Any memory model that allows some observable memory access reordering is called a relaxed memory model. The reorderings may cause bugs and make the production of parallel programs more difficult. In this work, we consider three main approaches to analysis of correctness of programs running under relaxed memory models:
- An exact analysis for finite-state programs running under the TSO memory model (Paper I), based on the well quasi-ordering framework.
- An over-approximate analysis for integer programs running under TSO (Paper II), based on predicate abstraction combined with a buffer abstraction.
- Two under-approximate analysis techniques for programs running under the TSO, PSO or POWER memory models (Papers III and IV), based on stateless model checking and dynamic partial order reduction.
In addition to determining whether a program is correct under a given memory model, the problem of automatic fence synthesis is also considered. A memory fence is an instruction that can be inserted into a program in order to locally disable some memory access reorderings. The fence synthesis problem is the problem of automatically inferring a minimal set of memory fences which restores sufficient order in a given program to ensure its correctness. / UPMARC
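The store-to-load reordering that TSO permits can be made concrete with a toy reachability analysis. The Python sketch below is an illustration, not the thesis's well-quasi-ordering algorithm: it exhaustively enumerates the states of the classic store-buffering litmus test (thread 0 runs `x=1; r0=y`, thread 1 runs `y=1; r1=x`, both variables initially 0) under a buffered TSO-like semantics, with and without a fence between each thread's store and load.

```python
def run_tso(fence):
    """Exhaustively explore executions of the store-buffering litmus test
    under a TSO-like semantics: each thread's stores enter a private FIFO
    buffer that drains to shared memory at nondeterministic points.
    Returns the set of observable (r0, r1) outcomes."""
    outcomes = set()
    # State: memory (x, y), store buffers per thread, pc per thread, registers.
    initial = ((0, 0), ((), ()), (0, 0), (None, None))
    stack, seen = [initial], set()
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        mem, bufs, pcs, regs = state
        # Both threads finished and buffers fully drained: record the outcome.
        if pcs == (2, 2) and bufs == ((), ()):
            outcomes.add(regs)
            continue
        for t in (0, 1):
            other = 1 - t
            # Transition: drain the oldest buffered store to shared memory.
            if bufs[t]:
                var, val = bufs[t][0]
                new_mem = list(mem); new_mem[var] = val
                new_bufs = list(bufs); new_bufs[t] = bufs[t][1:]
                stack.append((tuple(new_mem), tuple(new_bufs), pcs, regs))
            if pcs[t] == 0:
                # Store 1 to own variable (x for t0, y for t1): goes to buffer.
                new_bufs = list(bufs); new_bufs[t] = bufs[t] + ((t, 1),)
                new_pcs = list(pcs); new_pcs[t] = 1
                stack.append((mem, tuple(new_bufs), tuple(new_pcs), regs))
            elif pcs[t] == 1:
                # With a fence, the load may only issue once the buffer drained.
                if fence and bufs[t]:
                    continue
                var = other  # load the other thread's variable
                # TSO load: forward from own buffer if pending, else memory.
                pending = [v for (w, v) in bufs[t] if w == var]
                val = pending[-1] if pending else mem[var]
                new_regs = list(regs); new_regs[t] = val
                new_pcs = list(pcs); new_pcs[t] = 2
                stack.append((mem, bufs, tuple(new_pcs), tuple(new_regs)))
    return outcomes

print(run_tso(fence=False))  # contains (0, 0): the TSO-only weak outcome
print(run_tso(fence=True))   # (0, 0) is gone once fences are inserted
```

Without fences the outcome r0 = r1 = 0 is reachable even though no interleaving of the original program allows it; inserting a fence after each store, as a fence synthesis procedure would, removes exactly that outcome.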
|
2 |
Aspects of practical implementations of PRAM algorithms. Ravindran, Somasundaram, January 1993
No description available.
|
3 |
Weak-memory local reasoning. Wehrman, Ian Anthony, 15 February 2013
Program logics are formal logics designed to facilitate specification and correctness reasoning for software programs. Separation logic, a recent program logic for C-like programs, has found great success in automated verification due in large part to its embodiment of the principle of local reasoning, in which specifications and proofs are restricted to just those resources—variables, shared memory addresses, locks, etc.—used by the program during execution.
Existing program logics make the strong assumption that all threads agree on the values of shared memory at all times. But, on modern computer architectures, this assumption is unsound for certain shared-memory concurrent programs: namely, those with races. Typically races are considered to be errors, but some programs, like lock-free concurrent data structures, are necessarily racy. Verification of these difficult programs must take into account the weaker models of memory provided by the architectures on which they execute.
This dissertation project seeks to explicate a local reasoning principle for x86-like architectures. The principle is demonstrated with a new program logic for concurrent C-like programs that incorporates ideas from separation logic. The goal of the logic is to allow verification of racy programs like concurrent data structures for which no general-purpose high-level verification techniques exist.
|
4 |
A no-thin-air memory model for programming languages. Pichon-Pharabod, Jean Yves Alexis, January 2018
Many hardware and compiler optimisations introduced to speed up single-threaded programs also introduce additional, sometimes surprising, behaviours for concurrent programs with shared mutable state. How many of these extra behaviours occur in practice depends on the combination of the hardware, compiler, runtime, etc. that make up the platform. A memory model, which prescribes which values each read in a concurrent program may return, allows programmers to determine whether a program behaves as expected without having to worry about the details of the platform. However, capturing these behaviours in a memory model without also including undesirable "out-of-thin-air" behaviours that do not occur in practice has proved elusive. The memory model of C and C++ allows out-of-thin-air behaviour, while the Java memory model fails to capture some behaviours that are introduced in practice by compiler optimisations. In this thesis, we propose a memory model that forbids out-of-thin-air behaviour, yet allows the behaviours that do occur. Our memory model follows operational intuitions of how the hardware and compilers operate. We illustrate that it behaves as desired on a series of litmus tests. We show that it captures at least some of the expected behaviours, that it forms an envelope around some common compiler optimisations, and that it is implementable on common hardware using the expected compilation schemes. We also show that it supports some established programming idioms.
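The out-of-thin-air problem can be illustrated with the standard litmus test. The Python sketch below (an illustration, not material from the thesis) enumerates every sequentially consistent interleaving of the two threads T1: `r1 = x; y = r1` and T2: `r2 = y; x = r2` with x = y = 0 initially, and shows that a value such as 42 can never be produced; a model that nevertheless admits r1 = r2 = 42 has invented the value out of thin air.

```python
from itertools import permutations

def interleavings():
    """All sequentially consistent executions of the classic out-of-thin-air
    candidate:  T1: r1 = x; y = r1  ||  T2: r2 = y; x = r2  (x = y = 0).
    Returns the set of observable (r1, r2) outcomes."""
    prog = {
        1: [("load", "x", "r1"), ("store", "y", "r1")],
        2: [("load", "y", "r2"), ("store", "x", "r2")],
    }
    outcomes = set()
    # A schedule says which thread runs at each step; program order is kept.
    for sched in set(permutations([1, 1, 2, 2])):
        mem = {"x": 0, "y": 0}
        regs = {"r1": None, "r2": None}
        pc = {1: 0, 2: 0}
        for t in sched:
            op, var, reg = prog[t][pc[t]]
            if op == "load":
                regs[reg] = mem[var]   # read shared memory into a register
            else:
                mem[var] = regs[reg]   # write the register back to memory
            pc[t] += 1
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

print(interleavings())  # {(0, 0)} -- 42 can never appear
```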
|
5 |
Bayesian Models of Sequential Dependencies in Binary and Multi-Interval Response Tasks. Annis, Jeffrey Scott, 09 July 2014
A sequential dependency occurs when the response on the current trial is correlated with responses made on prior trials. Sequential dependencies have been observed in a variety of both perception and memory tasks. Thus, sequential dependencies provide a platform for relating these two cognitive processes. However, there are many issues associated with measuring sequential dependencies and therefore it is necessary to develop measurement models that directly address them. Here, several measurement models of sequential dependencies for both binary and multi-interval response tasks are described. The efficacy of the models is verified by applying them to simulated data sets with known properties. Lastly, the models are then applied to real-world data sets which test the critical assumption that the underlying processes of sequential dependencies are modulated by attention. The models reveal that increased vigilance during testing decreases the degree of sequential dependencies.
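As a minimal illustration of what a sequential dependency is (not of the Bayesian measurement models themselves), the Python sketch below generates binary responses whose current value repeats the previous one with some probability, then estimates the dependency directly from the sequence.

```python
import random

def simulate_responses(n, p_repeat, seed=0):
    """Generate a binary response sequence in which the current response
    repeats the previous one with probability p_repeat -- a simple
    first-order sequential dependency (an illustrative toy process)."""
    rng = random.Random(seed)
    resp = [rng.randint(0, 1)]
    for _ in range(n - 1):
        if rng.random() < p_repeat:
            resp.append(resp[-1])           # repeat: sequential dependency
        else:
            resp.append(rng.randint(0, 1))  # fresh, unbiased response
    return resp

def repeat_rate(resp):
    """Fraction of trials whose response matches the previous trial."""
    pairs = list(zip(resp, resp[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

strong = repeat_rate(simulate_responses(10_000, p_repeat=0.6))
weak = repeat_rate(simulate_responses(10_000, p_repeat=0.0))
print(round(strong, 2), round(weak, 2))  # roughly 0.8 vs 0.5
```

With no dependency the repeat rate sits near chance (0.5); with p_repeat = 0.6 it rises to about 0.6 + 0.4 × 0.5 = 0.8, which is the kind of signature a measurement model must separate from response bias and attention effects.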
|
6 |
Item noise versus context noise: using the list length effect to investigate the source of interference in recognition memory. Kinnell, Angela, January 2009
The present thesis aimed to investigate the source of interference in recognition memory. There are two primary alternatives – the item noise approach, in which interference comes about as a consequence of the other items on the study list, and the context noise approach, wherein interference arises from the previous contexts in which an item has been encountered. Alternatively, interference may occur through a combination of both item and context noise. There are many mathematical models designed to simulate the recognition process that incorporate either item or context noise, or both. Item noise models predict a significant list length effect, that is, that memory for an item that was part of a short list at study is better than that for an item that was part of a long list. Context noise models do not predict a significant difference in memory based on the length of the study list. The presence or absence of the list length effect can therefore be used as a mechanism by which to differentiate item and context noise models. The list length effect is among the most documented and replicated findings in the recognition memory literature (Gronlund & Elam, 1994). Yet, while many experiments have been conducted which have identified a significant list length effect in recognition (e.g. Bowles & Glanzer, 1983; Cary & Reder, 2003; Murnane & Shiffrin, 1991; Ohrt & Gronlund, 1999; Strong, 1912; Underwood, 1978), a number of published studies have failed to identify the effect (e.g. Dennis & Humphreys, 2001; Dennis, Lee & Kinnell, 2008; Jang & Huber, 2008; Murnane & Shiffrin, 1991; Schulman, 1974). Dennis and Humphreys (2001) argued that studies that had identified a significant effect of list length on recognition performance had done so because of a failure to control for four potentially confounding variables; retention interval, attention, displaced rehearsal and contextual reinstatement. 
The possible confounding effects of retention interval and displaced rehearsal are already well established in the literature and most studies employ some controls for these. Dennis et al. (2008) also found that while the role of contextual reinstatement had a pronounced influence on the detection of the list length effect it did not appear to be the most influential of the potential confounds. Thus, a major aim of the present thesis was to investigate the role of attention in the identification of the list length effect. Experiment 1 (N=160) involved two manipulations of attention. The first was to use either a retroactive or proactive design, with differential lapses of attention likely to be more pronounced in the latter. Second, in one condition participants were asked to perform a pleasantness rating task at study, a common technique to encourage participants to attend to the stimulus, while in the other condition they were asked to simply read the words. Results indicated that attention modulates the list length effect and that it is the retroactive versus proactive distinction which is most important as a significant effect of list length was found only when the proactive design was used. The encoding task had little effect. The design of Experiment 2 (N=80) was based on Cary and Reder's (2003) Experiment 3 which itself was a partial replication of Dennis and Humphreys' (2001) experiments. Cary and Reder introduced the Remember-Know (RK) task into the test list in their experiments and identified a significant effect of list length in the presence of controls for the four confounds where Dennis and Humphreys had not. The RK task is thought to index the relative contributions of familiarity and recollection in the recognition process (Gardiner, 1988). 
To the extent that the RK task encourages a recall-like process (see Clark, 1999; Diana, Reder, Arndt & Park, 2006), it may influence the results regarding the list length effect, in that the effect is widely accepted to occur in recall. Experiment 2 compared recognition memory with or without RK instructions. One condition involved the standard yes/no recognition paradigm, while the other made use of the RK task following all “yes” responses. Controls for the four potential confounds of Dennis and Humphreys were implemented. No significant effect of list length was identified in the accuracy data of either condition; however, there was a small but significant effect on median response latency for correct responses in the RK task condition. The results of Experiments 1 and 2 suggest that the effect of list length on recognition performance is negligible and nonsignificant when controls for the four potential confounds of list length are in place. However, both of these experiments, and almost all previous experiments investigating the list length effect, used words as the stimuli. The remaining four experiments in the present thesis (N=40 in each) sought to investigate the list length effect using stimuli other than words in an attempt to identify the boundary conditions of the effect. Each of these experiments followed the same basic method as Experiments 1 and 2. Four different kinds of stimuli were investigated, word pairs, images of novel faces, fractals and photographs. Results indicated a nonsignificant effect of list length for word pairs and photographs, however, there was a significant list length effect when faces (in the accuracy data) and fractals (in the response latency data) were used as the stimuli. However, all of the experiments in the present thesis used a within subjects manipulation of list length in order to maximise experimental power. This design may be an additional confound of the list length effect. 
The nature of the within subjects design means that by the end of the second study list, all participants will have studied the same number of items, thereby potentially removing any list length manipulation from the experiment. In addition, participants who studied the long list first may be more likely to be affected by lapses in attention than participants who began with the short list, making this, rather than interference, the potential cause of any list length differences. In order to investigate this potential confound, the results from all experiments of the present thesis were re-analysed using a between subjects analysis based on only the first list studied by each participant. The qualitative conclusions drawn from the majority of conditions remained unchanged. The between subjects analysis generally revealed larger effect sizes than did the within subjects analysis, although with the exception of the proactive conditions, these effects can be considered negligible to small at most. The pattern of results across the six experiments of the present thesis is problematic for existing mathematical models of recognition memory. While context noise models are able to account for negligible and nonsignificant effects of list length when words, word pairs and photographs are used as the stimuli, they are unable to predict a slightly larger and significant list length effect when the stimuli are novel faces or fractals. Conversely, while item noise models are able to account for a significant list length effect for faces and fractals, they are unable to predict a nonsignificant list length effect for words and word pairs. The results question whether either item or context noise can be taken as the sole source of interference in recognition memory. Rather, a combination of interference from different sources may be at work, with the precise nature of this combination dependent on the nature of the stimuli involved. 
However, it is important to note that these models must be able to all but eliminate interference from other items under certain conditions to obtain the negligible list length effect findings reported here. / Thesis (Ph.D.) -- University of Adelaide, School of Psychology, 2009
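The contrasting predictions of the two approaches can be caricatured in a few lines. In the Python sketch below (a toy signal-detection illustration, not one of the mathematical models discussed in the thesis), item noise adds variance for every other studied item, so predicted discriminability d' falls as the list grows, while context noise depends only on an item's prior contexts and is therefore independent of list length.

```python
import math

def dprime_item_noise(list_length, signal=1.5, item_noise_var=0.05):
    """Toy item-noise account: every other studied item contributes
    independent noise variance, so d' falls with study list length.
    (Illustrative parameters, not fitted values.)"""
    total_var = 1.0 + item_noise_var * (list_length - 1)
    return signal / math.sqrt(total_var)

def dprime_context_noise(list_length, signal=1.5, context_noise_var=0.5):
    """Toy context-noise account: interference comes from an item's prior
    contexts, not from the other list items, so d' ignores list length."""
    return signal / math.sqrt(1.0 + context_noise_var)

short, long_ = 20, 80
print(dprime_item_noise(short) > dprime_item_noise(long_))         # True
print(dprime_context_noise(short) == dprime_context_noise(long_))  # True
```

A significant list length effect is thus evidence for the item-noise term; a negligible one, for the context-noise account; the mixed pattern reported above is what neither pure version predicts.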
|
7 |
SAC Attack: Assessing the Role of Recollection in the Mirror Effect. Pazzaglia, Angela M., 01 September 2012
Low-frequency (LF) words have higher hit rates (HRs) and lower false alarm rates (FARs) than high-frequency (HF) words in recognition memory, a phenomenon termed the mirror effect by Glanzer and Adams (1985). The primary mechanism for producing the mirror effect varies substantially across models of recognition memory, with some models localizing the effects during encoding and others during retrieval. The current experiments contrast two retrieval-stage models, the Source of Activation Confusion (SAC; Reder, Nhouyvanisvong, Schunn, Ayers, Angstadt, & Hiraki, 2000) model and the unequal variance signal detection theory (UVSDT) criterion shift model (e.g., DeCarlo, 2002). The SAC model proposes that two distinct processes underlie the HR and FAR effects, with a familiarity process driving the FAR effect and a recollective process driving the HR effect. The UVSDT criterion shift model assumes that subjects use different criteria when making recognition judgments for HF and LF words, with this single process driving both the HR and FAR effects. Experiment 1 incorporated divided attention and speeded responding manipulations designed to remove the contribution of recollection in the SAC model, thereby eliminating the LF HR advantage. Experiment 2 manipulated the salience of the frequency classes, as the UVSDT criterion shift model requires that subjects are aware of the distinct frequency classes in order to shift their criteria. Across both experiments, model simulations and direct fits of the SAC model demonstrated systematic errors in prediction. While the UVSDT model struggled in fits to Experiment 1 data, the model provided acceptable fits to Experiment 2 data and accurately predicted the general pattern of effects in all cases. Furthermore, state-trace analyses provided compelling evidence in favor of single-process rather than dual-process models of recognition memory, casting serious doubt on the validity of the dual-process SAC model. 
Finally, the current experiments highlight the importance of obtaining model-based estimates of sensitivity and bias across frequency classes, as the standard practice of conducting direct comparisons of HRs and FARs for HF and LF words confounds bias and sensitivity differences.
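The signal-detection machinery behind the UVSDT account can be sketched in a few lines of Python. The parameterization below is an illustrative toy, not DeCarlo's fitted model: low-frequency words are given a higher d' and a stricter criterion, which together reproduce the mirror pattern (higher hit rate and lower false alarm rate for LF words).

```python
from statistics import NormalDist

def hr_far(d_prime, criterion, old_sd=1.25):
    """Hit and false-alarm rates under unequal-variance signal detection:
    new items ~ N(0, 1), old items ~ N(d', old_sd); respond "old" whenever
    memory strength exceeds the criterion."""
    hr = 1.0 - NormalDist(d_prime, old_sd).cdf(criterion)
    far = 1.0 - NormalDist(0.0, 1.0).cdf(criterion)
    return hr, far

# Hypothetical parameters: LF words better encoded (higher d') and judged
# against a stricter criterion than HF words.
hr_hf, far_hf = hr_far(d_prime=1.0, criterion=0.3)
hr_lf, far_lf = hr_far(d_prime=1.5, criterion=0.6)
print(hr_lf > hr_hf, far_lf < far_hf)  # True True -- the mirror pattern
```

Obtaining model-based estimates like these, rather than comparing raw HRs and FARs, is exactly the practice the dissertation recommends, since raw comparisons confound bias and sensitivity.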
|
8 |
Kahn process networks as concurrent data structures: lock freedom, parallelism, relaxation in shared memory. Lê, Nhat Minh, 09 December 2016
In this thesis, we are interested in Kahn process networks, a simple yet expressive model of concurrency proposed by Gilles Kahn in the 1970s, and their parallel implementation on modern shared-memory architectures. 
Kahn process networks expose concurrency to the programmer through an arrangement of sequential processes and single-producer single-consumer channels. The focus is on the implementation aspects. Of particular importance to our study are two parameters: lock freedom and relaxed memory. The development of fast and efficient lock-free algorithms ties into concerns of controlled resource consumption and reliable performance on current and future platforms with unfair or skewed scheduling such as virtual machines and GPUs. Our work with relaxed memory models complements this more theoretical approach by offering a window into realistic shared-memory architectures. We present a new lock-free algorithm for a Kahn process network interpreter. It is disjoint-access parallel: we allow multiple threads to work on the same shared Kahn process network, fully utilizing the parallelism exhibited by independent processes. It is non-blocking in that it guarantees global progress in bounded memory, even in the presence of (possibly infinite) delays affecting the executing threads. To our knowledge, it is the first lock-free system of this size, and integrates various well-known non-blocking techniques and concepts (e.g., safe memory reclamation, multi-word updates, assistance) with ideas and optimizations specific to the Kahn network setting. We also discuss a variant of the algorithm, which is blocking and targeted at high-performance computing, with encouraging experimental results.
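The programming model itself is simple to sketch. The Python toy below (cooperative generators and unbounded FIFO channels; nothing like the thesis's lock-free interpreter) wires a producer to a doubling process. Because reads block on empty channels and each channel has exactly one producer and one consumer, the result is deterministic under any fair schedule, which is the defining property of Kahn networks.

```python
from collections import deque

def producer(out, items):
    """Emit each item onto the output channel; writes never block."""
    for x in items:
        out.append(x)
        yield

def doubler(inp, out):
    """Read, double, and forward; reads block on an empty channel,
    modelled here by yielding until input is available."""
    while True:
        while not inp:
            yield
        out.append(inp.popleft() * 2)
        yield

def run(processes, steps=200):
    """Round-robin cooperative scheduler. Kahn networks are deterministic,
    so any fair schedule yields the same channel contents."""
    for _ in range(steps):
        for p in processes:
            next(p, None)  # step the process; finished processes are no-ops

a, b = deque(), deque()
run([producer(a, range(5)), doubler(a, b)])
print(list(b))  # [0, 2, 4, 6, 8]
```

The implementation challenge the thesis addresses is running many such processes on real threads over shared memory, with non-blocking progress guarantees instead of this toy's cooperative scheduler.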
|
9 |
Rethinking Buffer Operations in a Dual-Store Framework. Lehman, Melissa, 01 January 2011
Atkinson and Shiffrin's (1968) dual-store model of memory includes a structural memory store along with control processes conceptualized as a rehearsal buffer. I present a variant of Atkinson and Shiffrin's buffer model within a global memory framework that accounts for findings previously thought to be difficult for it to explain. This model assumes a limited capacity buffer where information is stored about items, along with information about associations between items and between items and the context in which they are studied. The strength of association between items and context is limited by the number of items simultaneously occupying the buffer. New findings that directly test the buffer assumptions are presented, including serial position effects, and conditional and first recall probabilities in immediate and delayed free recall, in a continuous distractor paradigm, and in experiments using list length manipulations of single item and paired item study lists. Overall, the model's predictions are supported by the data from these experiments, suggesting that control processes, conceptualized as a rehearsal buffer, are a necessary component of memory models.
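The buffer mechanics can be sketched deterministically. In the Python toy below (an illustration in the Atkinson-Shiffrin spirit, not the model fitted in the thesis), each item's long-term strength is simply the number of trials it spends in a fixed-capacity buffer; with FIFO displacement the end-of-list items accumulate less long-term strength, though in a full model they would still be recallable directly from the buffer at an immediate test (the classic account of recency).

```python
def study_list(items, capacity=4):
    """Toy rehearsal buffer: items enter a fixed-capacity buffer, displacing
    the oldest occupant when full; an item's long-term strength grows by one
    for each trial it spends in the buffer. (Illustrative sketch only.)"""
    buffer, strength = [], {item: 0 for item in items}
    for item in items:
        if len(buffer) == capacity:
            buffer.pop(0)            # displace the oldest item (FIFO)
        buffer.append(item)
        for occupant in buffer:      # one rehearsal per occupant per trial
            strength[occupant] += 1
    return strength

s = study_list(list(range(10)), capacity=4)
print([s[i] for i in range(10)])  # [4, 4, 4, 4, 4, 4, 4, 3, 2, 1]
```

Adding probabilistic (rather than FIFO) displacement would give early items an expected rehearsal advantage, the usual buffer explanation of primacy; FIFO is used here only to keep the sketch deterministic.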
|
10 |
A proposed memory consistency model for Chapel. Srinivasa Murthy, Karthik, 20 December 2010
A memory consistency model for a language defines the order of memory operations performed by each thread in a parallel execution. Such a constraint is necessary to prevent compiler and hardware optimizations from reordering certain memory operations, since such reordering might lead to unintuitive results. In this thesis, we propose a memory consistency model for Chapel, a parallel programming language from Cray Inc. Our memory model for Chapel is based on the idea of multiresolution and aims to provide a migration path from a program that is easy to reason about to a program that has better performance. Our model allows a programmer to write a parallel program with sequential consistency semantics, and then migrate to a performance-oriented version by incrementally changing different parts of the program to follow relaxed semantics. Sequential semantics helps in reasoning about the correctness of the parallel program and is provided by the strict sequential consistency model in our proposed memory model. The performance-oriented versions can be obtained either by using the compiler sequential consistency model, which maintains the sequential semantics, or by the relaxed consistency model, which maintains consistency only at global synchronization points. Our proposed memory model for Chapel thus combines the strict sequential consistency model, the compiler sequential consistency model and the relaxed consistency model. We analyze the performance of the three consistency models by implementing three applications, Barnes-Hut, FFT and Random-Access, in Chapel and in the hybrid model of MPI and Pthread. 
We conclude the following: the strict sequential consistency model is the best model for detecting algorithmic errors in the applications, though it leads to the worst performance; the relaxed consistency model gives the best performance among the three models, but relies on the programmer to enforce synchronization correctly; the performance of the compiler sequential model depends on the accuracy of the dependence analysis performed by the compiler; and the relative performance of the consistency models across Chapel and the hybrid programming model of MPI and Pthread is the same. This shows that our model is not tightly bound to Chapel and can be applied to other programming models and languages.
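The relaxed end of such a spectrum, consistency enforced only at global synchronization points, can be caricatured in a few lines. The Python sketch below is an illustration of the idea only, not Chapel's implementation: each task reads and writes a private cache, and writes become visible to other tasks only when a global sync() is performed.

```python
class RelaxedMemory:
    """Toy model of consistency enforced only at global synchronization
    points: tasks operate on private caches that are reconciled with
    shared memory only at sync(). (An illustrative sketch.)"""
    def __init__(self):
        self.shared = {}
        self.caches = {}

    def write(self, task, var, val):
        # Buffered in the task's private cache; not yet globally visible.
        self.caches.setdefault(task, {})[var] = val

    def read(self, task, var):
        # A task sees its own buffered writes first, then shared memory.
        cache = self.caches.setdefault(task, {})
        return cache.get(var, self.shared.get(var, 0))

    def sync(self):
        # Global synchronization point: publish every task's buffered writes.
        for cache in self.caches.values():
            self.shared.update(cache)
        self.caches.clear()

m = RelaxedMemory()
m.write("t1", "x", 1)
print(m.read("t2", "x"))  # 0: t1's write is not yet visible to t2
m.sync()
print(m.read("t2", "x"))  # 1: visible after the synchronization point
```

A strict sequentially consistent model would instead make every write immediately visible to all tasks, which is easier to reason about but forbids the buffering that makes the relaxed model fast.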
|