1.
Combinators and bisimulation proofs for restartable systems. Prasad, K. V. S. January 1987.
No description available.
2.
Implementations of process synchronisation, and their analysis. Mitchell, Kevin Nicholas Peter. January 1985.
No description available.
3.
A Framework for Testing Concurrent Programs. Ricken, Mathias. January 2011.
This study proposes a new framework that can effectively apply unit testing to concurrent programs, which are difficult to develop and debug. Test-driven development, a practice enabling developers to detect bugs early by incorporating unit testing into the development process, has become wide-spread, but it has only been effective for programs with a single thread of control. The order of operations in different threads is essentially non-deterministic, making it more complicated to reason about program properties in concurrent programs than in single-threaded programs. Because hardware, operating systems, and compiler optimizations influence the order in which operations in different threads are executed, debugging is problematic since a problem often cannot be reproduced on other machines. Multicore processors, which have replaced older single-core designs, have exacerbated these problems because they demand the use of concurrency if programs are to benefit from new processors.
The existing tools for unit testing programs are either flawed or too costly. JUnit, for instance, assumes that programs are single-threaded and therefore does not work for concurrent programs; ConTest and rstest predate the revised Java memory model and make incorrect assumptions about the operations that affect synchronization. Approaches such as model checking or comprehensive schedule-based execution are too costly to be used frequently. All of these problems prevent software developers from adopting the current tools on a large scale.
The proposed framework (i) improves JUnit to recognize errors in all threads, a necessary development without which all other improvements are futile, (ii) places some restrictions on the programs to facilitate automatic testing, (iii) provides tools that reduce programmer mistakes, and (iv) re-runs the unit tests with randomized schedules to simulate the execution under different conditions and on different machines, increasing the probability that errors are detected.
The improvements and restrictions, shown not to seriously impede programmers, reliably detect problems that the original JUnit missed. The execution with randomized schedules reveals problems that rarely occur under normal conditions.
With an effective testing tool for concurrent programs, developers can test programs more reliably and decrease the number of errors in spite of the proliferation of concurrency demanded by modern processors.
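The JUnit limitation that motivates improvement (i) is easy to demonstrate. In stock JUnit (JUnit 4 here), an assertion that fails in a spawned thread is delivered to that thread's uncaught-exception handler, so the test method itself returns normally and the test is reported as passing. The sketch below is illustrative (the test and class names are invented); it shows the silent failure and a hand-rolled workaround that records uncaught errors and re-throws them in the test thread, which is the kind of behavior a concurrency-aware runner builds in.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import java.util.concurrent.atomic.AtomicReference;
import org.junit.Test;

// Illustrative sketch (invented names): why stock JUnit misses failures in
// other threads, and a hand-rolled way to surface them in the test thread.
public class BackgroundFailureTest {

    // Reported as PASSING by stock JUnit even though the assertion fails: the
    // AssertionError is thrown in the worker thread, not the test thread.
    @Test
    public void failureInWorkerThreadGoesUnnoticed() throws InterruptedException {
        Thread worker = new Thread(() -> assertEquals(42, 6 * 7 + 1));
        worker.start();
        worker.join();
    }

    // FAILS as it should: the uncaught error is captured by a handler and
    // reported in the test thread after the worker finishes.
    @Test
    public void failureInWorkerThreadIsReported() throws InterruptedException {
        AtomicReference<Throwable> error = new AtomicReference<>();
        Thread worker = new Thread(() -> assertEquals(42, 6 * 7 + 1));
        worker.setUncaughtExceptionHandler((t, e) -> error.set(e));
        worker.start();
        worker.join();
        if (error.get() != null) {
            fail("worker thread failed: " + error.get());
        }
    }
}
```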
4.
Ταυτόχρονα περιβάλλοντα προγραμματισμού : διδακτικές προσεγγίσεις / Concurrent programming environments: didactic approaches. Νικολός, Δημήτριος. 06 September 2010.
The Scratch programming language is ideal for introductory programming courses; this new language follows the concurrent programming paradigm.
The thesis describes the design and evaluation of a semester-long course for learning programming with Scratch, with two aims: to study how novice programmers approach the synchronization their programs require, and to formulate proposals for improving the course. A new proposal for the laboratory course is presented. A design-based research methodology was followed.
6.
Performance Optimizations for Software Transactional Memory. January 2011.
The transition from single-core processors to multi-core processors demands a change from sequential programming to concurrent programming for mainstream programmers. However, concurrent programming has long been widely recognized as notoriously difficult. A major reason for its difficulty is that existing concurrent programming constructs provide low-level programming abstractions; using these constructs forces programmers to consider many low-level details. Locks, the dominant programming construct for mutual exclusion, suffer several well-known problems, such as deadlock, priority inversion, and convoying, and are directly related to the difficulty of concurrent programming. The alternative to locks, i.e. non-blocking programming, is not only extremely error-prone but also does not produce consistently good performance. Better programming constructs are critical to reduce the complexity of concurrent programming, increase productivity, and expose the computing power of multi-core processors.
Transactional memory has emerged recently as a promising programming construct for supporting atomic operations on shared data. By eliminating the need to consider a huge number of possible interactions among concurrent transactions, transactional memory greatly reduces the complexity of concurrent programming and vastly improves programming productivity. Software transactional memory (STM) systems implement a transactional memory abstraction in software. Unfortunately, existing designs of STM systems incur significant performance overhead that could prevent them from being widely used. Reducing STM's overhead will be critical if mainstream programmers are to improve productivity without suffering performance degradation.
My thesis is that the performance of STM can be significantly improved by intelligently designing validation and commit protocols, by designing the time base, and by incorporating application-specific knowledge. I present four novel techniques for improving the performance of STM systems to support my thesis. First, I propose a time-based STM system based on a runtime tuning strategy that is able to deliver performance equal to or better than existing strategies. Second, I present several novel commit phase designs and evaluate their performance. Then I propose a new STM programming interface extension that enables transaction optimizations using fast shared memory reads while maintaining transaction composability. Next, I present a distributed time base design that outperforms existing time base designs for certain types of STM applications. Finally, I propose a novel programming construct to support multi-place isolation. Experimental results show that the techniques presented here can significantly improve STM performance. We expect these techniques to help STM be accepted by more programmers.
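The dissertation's specific protocol designs are not reproduced in the abstract, but the general shape of a time-based STM is worth sketching. The fragment below is a minimal, illustrative Java sketch of TL2-style read validation against a global version clock; all names are invented for the example, and a real STM additionally needs read and write sets, per-location locks, a full commit protocol, and contention management.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal, illustrative sketch of time-based (TL2-style) read validation
// against a global version clock. Not the designs evaluated in the thesis.
public class TimeBasedStmSketch {

    static final AtomicLong GLOBAL_CLOCK = new AtomicLong(0);

    static class AbortException extends RuntimeException {}

    // A transactional memory location stamped with the version of its last commit.
    static class VersionedCell {
        private volatile long version;
        private volatile long value;

        // Consistent read for a transaction that sampled readVersion at start:
        // the value is valid only if it was committed no later than readVersion
        // and did not change while we were reading it.
        long read(long readVersion) {
            long before = version;
            long v = value;
            long after = version;
            if (before != after || after > readVersion) {
                throw new AbortException();   // stale or torn read: abort and retry
            }
            return v;
        }

        // Commit-time write: stamp the new value with a freshly acquired version.
        void commit(long newValue, long writeVersion) {
            value = newValue;
            version = writeVersion;
        }
    }

    public static void main(String[] args) {
        VersionedCell balance = new VersionedCell();
        balance.commit(100, GLOBAL_CLOCK.incrementAndGet());

        long readVersion = GLOBAL_CLOCK.get();          // begin transaction
        long observed = balance.read(readVersion);      // validated read
        long writeVersion = GLOBAL_CLOCK.incrementAndGet();
        balance.commit(observed + 10, writeVersion);    // commit the update

        System.out.println("balance = " + balance.read(GLOBAL_CLOCK.get()));
    }
}
```

Even in this toy, the role of the time base is visible: a read is accepted only if the location's version stamp does not exceed the version the transaction sampled when it began, which is exactly the part that the commit-phase and time-base designs try to make cheap.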
7.
Parallelism in Node.js applications: Data flow analysis of concurrent scripts. Jansson, Linda. January 2017.
To fully utilize multicore processors in Node.js applications, the applications must be programmed as multiple processes. Parallel execution can increase the throughput of data and hence lower data buffering for inter-process communication. Node.js's asynchronous programming model and interface to the operating system make for convenient tools that are well suited for multiprocess programming. However, the run-time behavior of asynchronous processes results in non-deterministic processor load and data flow. That means the performance gain from increasing concurrency depends on both the application's run-time state and the hardware's capacity for parallel execution. The objective of this thesis work is to explore the effects of increasing parallelism in Node.js applications by measuring the differences in the amount of data buffering when distributed processes run on a varying number of cores with a fixed rate of asynchronously arriving data. The goal is to simulate and examine the run-time behavior of three basic multiprocess Node.js application architectures in order to discuss and evaluate software parallelism techniques. The three architectures are: pipelined nodes for temporally dependent processing, a vector of nodes for data parallel processing, and a grid of nodes for one-to-many branched processing. To simulate and visualize the run-time behavior, a simulation environment using multiple Node.js processes is created. The simulation is agent-based, where the agent is an abstraction for a specific data flow within the application. The simulation models and visualizes all of the data flows within a distributed application where processes communicate asynchronously via messages through sockets. The results show that performance can increase when distributing Node.js applications across multiple processes running in parallel on multicore hardware. There are, however, diminishing returns as the number of active processes equals or exceeds the number of cores. A good rule of thumb seems to be to distribute the decoupled logic across as many processes as there are cores. The interaction between asynchronous processes is on the whole made very simple with Node.js. Although running multiple instances of Node.js requires more memory, the distributed architecture has the potential to increase performance by nearly as many times as the number of cores in the processor.
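As a rough illustration of the pipelined architecture and of what "data buffering" means here, the sketch below (written in Java to keep one language across this listing, whereas the thesis uses Node.js processes communicating over sockets) joins a faster producer stage to a slower consumer stage with a bounded buffer and samples how much data queues up between them. All names and rates are invented for the example.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the "pipelined nodes" shape: a faster upstream stage
// feeding a slower downstream stage through a bounded buffer. The thesis uses
// Node.js processes and sockets; threads and a BlockingQueue stand in here so
// the growth of buffered data is easy to observe.
public class PipelineBufferDemo {
    private static final int ITEMS = 2_000;

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(500);

        Thread producer = new Thread(() -> {            // stage 1: data arrives every ~1 ms
            try {
                for (int i = 0; i < ITEMS; i++) {
                    buffer.put(i);                      // blocks when the buffer is full
                    TimeUnit.MILLISECONDS.sleep(1);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {            // stage 2: processing takes ~2 ms
            try {
                for (int i = 0; i < ITEMS; i++) {
                    buffer.take();
                    TimeUnit.MILLISECONDS.sleep(2);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();

        // Sample the buffered backlog while the pipeline runs: with the slower
        // downstream stage it grows until the bounded buffer is full.
        while (consumer.isAlive()) {
            System.out.println("buffered items: " + buffer.size());
            TimeUnit.MILLISECONDS.sleep(200);
        }
        producer.join();
    }
}
```

In the thesis, the analogous quantity is the amount of data buffered for inter-process communication, measured while varying how many cores the distributed processes run on.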
8.
A refactoring approach to improve energy consumption of parallel software systems. PINTO, Gustavo Henrique Lima. 24 February 2015.
Empowering application programmers to make energy-aware decisions is a critical dimension of improving the energy efficiency of computer systems. Despite the growing interest in designing software development processes, frameworks, and programming models to facilitate application-level energy management, little is known about how to design application-level, energy-efficient solutions for concurrent software running on parallel architectures. This is unfortunate for at least two reasons: (1) thanks to the proliferation of multicore CPUs, concurrent programming is a standard practice in modern software engineering; and (2) a CPU with more cores (say 32) often consumes more power than one with fewer cores (say 1 or 2). However, application developers still do not understand how their code modifications impact energy consumption in a parallel system. Analyzing Stack Overflow showed evidence that this is a real problem: even though interest in energy consumption issues has grown over the years, developers still hold misconceptions and assumptions that are not always true. This lack of knowledge is primarily due to the lack of appropriate tools to measure, identify, and refactor energy consumption hotspots.
This thesis begins to bridge the first problem (the lack of knowledge) by presenting an extensive experimental space exploration over two concurrent programming building blocks: (1) thread-safe collections and (2) thread management constructs. Through a list of findings that are not always obvious, we illuminate the relationship between design decisions and their settings, on the one hand, and the energy consumption of parallel systems, on the other.
The thesis then starts to bridge the second problem (the lack of tools). Lessons learned in the earlier studies showed that ForkJoin tasks often operate on an indexable data structure, with subtasks operating on only part of it. One naïve solution is to copy part of the data structure and use the copy in the next computation; in a recursive framework such as ForkJoin, given an array-based representation, each recursive call will then create n new arrays, where n is the width of forking. To address this, we derive a refactoring that, instead of copying part of the data structure, shares it, allowing subtasks to operate on contiguous partitions of the same structure. We manually applied this refactoring to 15 open source projects, and it saved energy in every one of them (12% savings on average). We sent the refactored versions to the project owners and, within 40 days, 7 of the 9 projects that replied to our patches had accepted and merged them; discussions during the merge process revealed that developers were not aware of this optimization. We then implemented this refactoring as an Eclipse plug-in so that other developers can (1) detect uses of copying where sharing would be beneficial and (2) refactor the code in an automated way.
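The copy-to-share refactoring described above is easy to picture with a small RecursiveTask. The sketch below is illustrative rather than taken from the thesis or any of the refactored projects: the "before" task copies its subrange with Arrays.copyOfRange on every fork, while the "after" task passes (lo, hi) indices so all subtasks share one backing array. Names and the threshold are invented for the example.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch of the copy-vs-share refactoring on a ForkJoin sum.
public class CopyVsShare {
    static final int THRESHOLD = 10_000;

    // BEFORE: each fork copies its half of the array (new arrays on every recursive call).
    static class SumCopying extends RecursiveTask<Long> {
        private final int[] data;
        SumCopying(int[] data) { this.data = data; }
        @Override protected Long compute() {
            if (data.length <= THRESHOLD) {
                long s = 0; for (int v : data) s += v; return s;
            }
            int mid = data.length / 2;
            SumCopying left  = new SumCopying(Arrays.copyOfRange(data, 0, mid));
            SumCopying right = new SumCopying(Arrays.copyOfRange(data, mid, data.length));
            left.fork();
            return right.compute() + left.join();
        }
    }

    // AFTER: subtasks share the array and work on contiguous [lo, hi) partitions.
    static class SumSharing extends RecursiveTask<Long> {
        private final int[] data; private final int lo, hi;
        SumSharing(int[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }
        @Override protected Long compute() {
            if (hi - lo <= THRESHOLD) {
                long s = 0; for (int i = lo; i < hi; i++) s += data[i]; return s;
            }
            int mid = (lo + hi) >>> 1;
            SumSharing left  = new SumSharing(data, lo, mid);
            SumSharing right = new SumSharing(data, mid, hi);
            left.fork();
            return right.compute() + left.join();
        }
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        Arrays.fill(data, 1);
        ForkJoinPool pool = ForkJoinPool.commonPool();
        System.out.println("copying: " + pool.invoke(new SumCopying(data)));
        System.out.println("sharing: " + pool.invoke(new SumSharing(data, 0, data.length)));
    }
}
```

Avoiding the per-fork copies removes the allocation and copying work performed on every recursive call, which is the overhead the refactoring targets.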
9.
Avaliação do custo e efetividade dos critérios de teste estruturais no contexto de programas concorrentes com memória compartilhada / Evaluation of the cost, effectiveness and strength of structural testing criteria in the context of concurrent programs with shared memory. Melo, Silvana Morita. 11 October 2012.
Concurrent program testing is a challenging activity because of factors that are not present in sequential programs, such as communication, synchronization, and nondeterminism. Some testing techniques have been proposed for this context, but their applicability is rarely evaluated by theoretical or experimental studies. This work contributes in that direction by proposing and conducting an experimental study to evaluate the cost, effectiveness, and strength of structural testing criteria for multithreaded programs implemented with the PThreads (POSIX Threads) standard. The ValiPThread testing tool is used to support the conduct of the experiment. The programs used in the experiment were selected from benchmarks that are commonly used to study testing techniques for concurrent programs, such as Inspect, Helgrind, and Rungta; programs that solve classical concurrency problems were also included. Based on the results, an application strategy for the criteria was defined, considering their cost and effectiveness. Furthermore, all material used and generated during the experiment was assembled into a lab package, in order to allow the research community to replicate the study and to compare these criteria with other testing techniques in the context of concurrent programs.
10.
Teste de mutação aplicado a programas concorrentes em MPI / Mutation testing applied to concurrent programs in MPI. Silva, Rodolfo Adamshuk. 13 March 2013.
Concurrent programming has become a popular paradigm for software development. It is essential for building applications that aim to reduce computational time in many domains, such as weather forecasting and image processing. These programs present new features, such as communication, synchronization, and nondeterminism, that must be considered during the testing activity. Software testing seeks to ensure quality by identifying faults in the product, and mutation testing is a criterion based on the mistakes most commonly made by software developers. However, mutation testing cannot be applied to concurrent programs in the same way it is applied to sequential ones, because of the peculiarities of concurrent programs; one of the problems in this context is their non-deterministic behavior. This work investigates the definition of mutation testing for concurrent programs implemented in MPI (Message Passing Interface), which communicate and synchronize by message passing. Typical faults in this domain were considered in order to model mutation operators that address the communication and synchronization aspects of these applications, and a new procedure is proposed to support the behavioral analysis of the mutants. The ideas were implemented in a testing tool called ValiMPI Mut.