101. Combinatorial Slice Theory (de Oliveira Oliveira, Mateus, January 2013)
Slices are digraphs that can be composed together to form larger digraphs. In this thesis we introduce the foundations of a theory whose aim is to provide ways of defining and manipulating infinite families of combinatorial objects such as graphs, partial orders, logical equations, etc. We give special attention to objects that can be represented as sequences of slices. We have successfully applied our theory to obtain novel results in three fields: concurrency theory, combinatorics and logic. Some notable results are:

Concurrency theory: We prove that inclusion and emptiness of intersection of the causal behavior of bounded Petri nets are decidable. These problems had been open for almost two decades. We also introduce an algorithm to transitively reduce infinite families of DAGs, which allows us to operate with partial order languages defined via distinct formalisms, such as Mazurkiewicz trace languages and message sequence chart languages.

Combinatorics: For each constant z ∈ N, we define the notion of z-topological order for digraphs, and use it as a point of connection between the monadic second order logic of graphs and directed width measures, such as directed path-width and cycle rank. Through this connection we establish the polynomial time solvability of a large number of natural counting problems on digraphs admitting z-topological orderings.

Logic: We introduce an ordered version of equational logic. We show that the validity problem for this logic is fixed parameter tractable with respect to the depth of the proof DAG, and solvable in polynomial time with respect to several notions of width of the equations being proved. In this way we establish the polynomial time provability of equations that can be out of reach of techniques based on completion and heuristic search.
102. Attentiveness: Reactivity at Scale (Hartman, Gregory S., 01 December 2010)
Clients of reactive systems often change their priorities. For example, a human user of an email viewer may attempt to display a message while a large attachment is downloading. To the user, an email viewer that delayed display of the message would exhibit a failure similar to priority inversion in real-time systems.
We propose a new quality attribute, attentiveness, that provides a unified way to model the forms of redirection offered by application-level reactive systems to accommodate the changing priorities of their clients, which may be either humans or system components. Modeling attentiveness as a quality attribute provides system designers with a single conceptual framework for policy and architectural decisions that address trade-offs among criteria such as responsiveness, overall performance, behavioral predictability, and state consistency.
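As a rough illustration of the redirection attentiveness calls for (our sketch, not the thesis's model): a worker that draws small steps of work from a priority queue lets a newly urgent request overtake the remaining steps of a long-running job, avoiding the email-viewer failure described above.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Hedged sketch (not from the thesis): one way to model "attentiveness" is a
// worker that processes long jobs in small steps drawn from a priority queue,
// so a high-priority request (display a message) overtakes the remaining
// steps of a low-priority job (download an attachment).
public class AttentiveWorker {
    // Higher priority value = more urgent; compareTo is reversed so the
    // queue's head is always the most urgent step.
    record Task(int priority, String name) implements Comparable<Task> {
        @Override public int compareTo(Task o) { return Integer.compare(o.priority, priority); }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
        for (int i = 0; i < 3; i++) queue.put(new Task(1, "download chunk " + i));
        queue.put(new Task(10, "display message"));   // urgent request queued alongside

        while (!queue.isEmpty()) {
            Task t = queue.take();                    // most urgent step first
            System.out.println("running: " + t.name());
        }
        // "display message" runs before the remaining download chunks.
    }
}
```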
103. Nested pessimistic transactions for both atomicity and synchronization in concurrent software (Chammah, Tarek, January 2011)
Existing atomic section interface proposals have, thus far, tended only to isolate transactions from each other. Less considered is the coordination of threads performing transactions with respect to one another. Synchronization of nested sections is typically relegated to outside of, and among, the top-level flattened sections, so existing models do not permit the composition of even simple synchronization constructs such as barriers. The proposed model integrates synchronization as a first-class construct in a truly nested atomic block implementation. The implementation is evaluated on quantitative benchmarks, with qualitative examples of the atomic section interface's expressive power compared with conventional transactional memory implementations.
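To see why flattening breaks composition (our illustrative sketch, using a plain lock and `CyclicBarrier` rather than the thesis's interface): if nested atomic blocks are flattened into one top-level critical section, a barrier inside that section deadlocks, because the thread waiting at the barrier still holds the section its peers must enter.

```java
import java.util.concurrent.CyclicBarrier;

// Hedged sketch of the composition problem, not the thesis's API. Under
// flattened semantics every nested atomic block collapses into one top-level
// critical section, modelled here with a single global lock.
public class BarrierInAtomic {
    static final Object atomicSection = new Object();   // stand-in for a flattened atomic block
    static final CyclicBarrier barrier = new CyclicBarrier(2);

    static void worker() {
        synchronized (atomicSection) {      // "atomic" via one global lock
            try {
                barrier.await();            // deadlock: the other worker is blocked
            } catch (Exception e) {         // outside, waiting for the lock
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        new Thread(BarrierInAtomic::worker).start();
        new Thread(BarrierInAtomic::worker).start();
        // Never terminates. A truly nested model must instead treat the
        // barrier as a first-class construct inside the section, which is
        // the kind of composition the thesis proposes.
    }
}
```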
104. Improving Performance of Network Intrusion Detection Systems through Concurrent Mechanisms (Atakan, Mustafa, 01 January 2004)
As the bandwidth of present networks grows beyond that of the past, the demand for Network Intrusion Detection Systems (NIDS) that function in real time becomes a major requirement for high-speed networks. If these systems are not fast enough to process all passing network traffic, malicious security violations may exploit this weakness. In order to make such applications schedulable, a concurrency mechanism is introduced into the general flowchart of their algorithm. The principal aim is to fully utilize each resource of the platform and to overlap the independent parts of the application. In this context, a generic multi-threaded infrastructure is designed and proposed. The concurrency metrics of the new system are analyzed and compared with the original ones.
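A minimal sketch of the kind of overlap meant here (our assumption about the design, not the thesis's code): a bounded queue decouples a capture stage from an analysis stage so the two run concurrently.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch: overlapping packet capture and analysis with a bounded
// queue, so the capture stage is never stalled by slow pattern matching.
public class NidsPipeline {
    private static final BlockingQueue<byte[]> packets = new ArrayBlockingQueue<>(1024);

    public static void main(String[] args) {
        Thread capture = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) {
                byte[] pkt = new byte[64];            // stand-in for a captured packet
                try { packets.put(pkt); } catch (InterruptedException e) { return; }
            }
        });
        Thread analyse = new Thread(() -> {
            try {
                for (int i = 0; i < 10_000; i++) {
                    byte[] pkt = packets.take();      // runs concurrently with capture
                    inspect(pkt);
                }
            } catch (InterruptedException e) { /* shut down */ }
        });
        capture.start();
        analyse.start();
    }

    static void inspect(byte[] pkt) { /* signature matching would go here */ }
}
```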
105. Gene Reordering and Concurrency in Genetic Algorithms (Sehitoglu, Onur Tolga, 01 August 2002)
This study first introduces an order-free chromosome encoding to enhance the performance of genetic algorithms by learning the linkage of building blocks in non-binary encodings. The method introduces a measure called affinity, which is based on the statistical properties of gene valuations in the population. It uses the affinity values of local and global gene pairs to construct a global permutation with tight building-block positioning. The method is tested and experimental results are shown for a group of deceptive and real-life test problems.
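The affinity measure itself is not reproduced in this abstract; as a purely illustrative stand-in, one could score a pair of gene positions by how concentrated their joint valuations are across the population, and place high-scoring pairs close together in the permutation:

```java
import java.util.*;

// Illustrative stand-in only: the thesis's actual affinity formula is not
// given here, so we assume a simple co-occurrence statistic. For two gene
// positions i and j, measure how often their most frequent joint valuation
// occurs across the population; high values suggest the pair belongs to one
// building block and should be positioned tightly in the global permutation.
public class Affinity {
    // population: each individual is an int[] of gene values (non-binary alleles)
    static double affinity(List<int[]> population, int i, int j) {
        Map<Long, Integer> counts = new HashMap<>();
        for (int[] ind : population) {
            long pair = ((long) ind[i] << 32) | (ind[j] & 0xffffffffL);
            counts.merge(pair, 1, Integer::sum);     // count each joint valuation
        }
        int max = counts.values().stream().max(Integer::compare).orElse(0);
        return (double) max / population.size();     // fraction sharing the modal pair
    }
}
```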
The study then proposes a gene-level concurrency model in which each gene position is implemented as a separate process. This combines the advantages of implicit parallelism and a chromosome-structure-free approach. It also facilitates the implementation of the gene reordering method introduced here, and potentially of other non-linear chromosome encodings.
106. Progress-based verification and derivation of concurrent programs (Dongol, Brijesh, date unknown)
Concurrent programs are known to be complicated because synchronisation is required amongst the processes in order to ensure safety (nothing bad ever happens) and progress (something good eventually happens). Due to possible interference from other processes, a straightforward rearrangement of statements within a process can lead to dramatic changes in the behaviour of a program, even if the behaviour of the process executing in isolation is unaltered. Verifying concurrent programs using informal arguments is usually unconvincing, which makes formal methods a necessity. However, formal proofs can be challenging due to the complexity of concurrent programs. Furthermore, safety and progress properties are proved using fundamentally different techniques. Within the literature, safety has been given considerably more attention than progress.

One method of formally verifying a concurrent program is to develop the program, then perform a post-hoc verification using one of the many available frameworks. However, this approach tends to be optimistic because the developed program seldom satisfies its requirements. When a proof becomes difficult, it can be unclear whether the proof technique or the program itself is at fault. Furthermore, following any modifications to program code, a verification may need to be repeated from the beginning. An alternative approach is to develop a program using a verify-while-develop paradigm. Here, one starts with a simple program together with the safety and progress requirements that need to be established. Each derivation step consists of a verification, followed by the introduction of new program code motivated by the proofs themselves. Because a program is developed side-by-side with its proof, the completed program satisfies the original requirements.

Our point of departure for this thesis is the Feijen and van Gasteren method for deriving concurrent programs, which uses the logic of Owicki and Gries. Although Feijen and van Gasteren derive several concurrent programs, because the Owicki-Gries logic does not include a logic of progress, their derivations only consider safety properties formally. Progress is considered post hoc to the derivation using informal arguments. Furthermore, rules on how programs may be modified have not been presented, i.e., a program may be arbitrarily modified and hence unspecified behaviours may be introduced.

In this thesis, we develop a framework for developing concurrent programs in the verify-while-develop paradigm. Our framework incorporates linear temporal logic (LTL), and hence both safety and progress properties may be given full consideration. We examine foundational aspects of progress by formalising minimal progress, weak fairness and strong fairness, which allow scheduler assumptions to be described. We formally define progress terms such as individual progress, individual deadlock and liveness (which are properties of blocking programs) and wait-, lock- and obstruction-freedom (which are properties of non-blocking programs). Then, we explore the inter-relationships between the various terms under the different fairness assumptions. Because LTL is known to be difficult to work with directly, we incorporate the logic of Owicki-Gries (for proving safety) and the leads-to relation from UNITY (for proving progress) within our framework. Following the nomenclature of Feijen and van Gasteren, our techniques are kept calculational, which aids derivation.
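For reference, the standard LTL renderings of the two fairness notions and of UNITY's leads-to relation are sketched below (the thesis's precise formalisations may differ):

```latex
% Standard renderings (a sketch; the thesis's own definitions may differ).
% For an action $a$ with enabling predicate $en_a$:
\begin{align*}
  \text{weak fairness:}    &\quad \Diamond\Box\, en_a \;\Rightarrow\; \Box\Diamond\, taken_a \\
  \text{strong fairness:}  &\quad \Box\Diamond\, en_a \;\Rightarrow\; \Box\Diamond\, taken_a \\
  \text{leads-to (UNITY):} &\quad P \leadsto Q \;\equiv\; \Box\,(P \Rightarrow \Diamond Q)
\end{align*}
```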
We prove the soundness of our framework via theorems that relate our techniques to the LTL definitions. Furthermore, we introduce several methods for proving progress using a well-founded relation, which keeps proofs of progress scalable.

During program derivation, in order to ensure that unspecified behaviour is not introduced, it is also important to verify a refinement, i.e., to show that every behaviour of the final (more complex) program is a possible behaviour of the abstract representation. To facilitate this, we introduce the concept of an enforced property: a property that the program code does not yet satisfy, but that is required of the final program. Enforced properties may be any LTL formula, and hence may represent both safety and progress requirements. We formalise stepwise refinement of programs with enforced properties, so that code is introduced in a manner that satisfies the enforced properties, yet refinement of the original program is guaranteed. We present derivations of several concurrent programs from the literature.
107. Accelerating digital forensic searching through GPGPU parallel processing techniques (Bayne, Ethan, January 2017)
Background: String searching within a large corpus of data is a critical component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in the capacity of consumer storage devices requires similar improvements in the performance of the string searching techniques employed by DF tools used to analyse forensic data. As string searching is a trivially parallelisable problem, general-purpose graphics processing unit (GPGPU) approaches are a natural fit. So far, only some of the research on GPGPU programming has been transferred to the field of DF, and that work used a closed-source GPGPU framework, Compute Unified Device Architecture (CUDA). These earlier studies concluded that the local storage devices from which forensic data are read present an insurmountable performance bottleneck.

Aim: This research hypothesises that modern storage devices no longer present a performance bottleneck to the processing techniques currently used in the field, and proposes that an open-standards GPGPU framework, Open Computing Language (OpenCL), would be better suited to accelerate file carving, with wider compatibility across an array of modern GPGPU hardware. This research further hypothesises that a modern multi-string searching algorithm may be better adapted to fulfil the requirements of DF investigation.

Methods: This research presents a review of existing research and tools used to perform file carving and acknowledges related work within the field. To test the hypotheses, parallel file carving software was created using C# and OpenCL, employing both a traditional string searching algorithm and a modern multi-string searching algorithm to conduct analysis of forensic data. A set of case studies demonstrating and evaluating the potential benefits of adopting various methods of string searching on forensic data is given. The research concludes with a final case study that evaluates the performance of file carving with the best-performing string searching solution and compares the result with an existing file carving tool, Foremost.

Results: The results establish that the parallelised OpenCL and Parallel Failureless Aho-Corasick (PFAC) solution delivers significant processing improvements on both single and multiple GPUs on modern hardware. In comparison to CPU approaches, GPGPU processing models were observed to minimise the time required to search for larger numbers of patterns. Results also showed that employing PFAC delivers significant performance increases over the Boyer-Moore (BM) algorithm. The method employed to read data from storage devices was also seen to have a significant effect on the time required to perform string searching and file carving.

Conclusions: Empirical testing suggests that the proposed string searching method is more efficient than the widely adopted Boyer-Moore algorithms when applied to string searching and file carving. The developed OpenCL GPGPU processing framework was found to be more efficient than CPU counterparts when searching for larger numbers of patterns within data. This research also refutes claims that file carving is solely limited by the performance of the storage device, and presents compelling evidence that performance is bound by the combination of the storage device's performance and the processing technique employed.
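The core idea of PFAC (sketched here from the published algorithm, not from the thesis's C#/OpenCL code) is to drop Aho-Corasick's failure links entirely: every input position starts its own match attempt, and an attempt that mismatches simply dies, which is what makes one-thread-per-position GPU parallelisation natural. A sequential Java rendering:

```java
import java.util.*;

// Sketch of the Parallel Failureless Aho-Corasick (PFAC) idea: a goto-only
// trie with no failure links. Each starting position runs an independent
// match attempt; on a GPU the outer loop becomes one thread per position.
public class PfacSketch {
    static final int ALPHA = 256;
    static List<int[]> trie = new ArrayList<>();     // trie.get(n)[c] = next node, or -1
    static List<Integer> out = new ArrayList<>();    // pattern id ending at node, or -1

    static int newNode() {
        int[] row = new int[ALPHA];
        Arrays.fill(row, -1);
        trie.add(row);
        out.add(-1);
        return trie.size() - 1;
    }

    static void addPattern(byte[] pat, int id) {
        int node = 0;
        for (byte b : pat) {
            int c = b & 0xff;
            if (trie.get(node)[c] == -1) trie.get(node)[c] = newNode();
            node = trie.get(node)[c];
        }
        out.set(node, id);
    }

    public static void main(String[] args) {
        newNode();                                    // root
        addPattern("PK\u0003\u0004".getBytes(), 0);   // e.g. a ZIP local-file header
        byte[] data = "xxPK\u0003\u0004yy".getBytes();
        for (int start = 0; start < data.length; start++) {   // one GPU thread each
            int node = 0;
            for (int i = start; i < data.length; i++) {
                node = trie.get(node)[data[i] & 0xff];
                if (node == -1) break;                // attempt dies: no failure links
                if (out.get(node) != -1)
                    System.out.println("pattern " + out.get(node) + " at offset " + start);
            }
        }
    }
}
```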
108. DTX: a distributed concurrency control mechanism for XML data (Moreira, Leonardo Oliveira, January 2008)
XML has become a widely used standard for data representation and exchange among Web applications. Consequently, a large volume of such data is distributed over the Web and stored using several persistence methods. DBMSs provide concurrency control techniques to manage data; however, the structure of XML data makes it difficult to apply these techniques. Projects have been proposed that provide management of XML documents, but most do not provide efficient concurrency control mechanisms for distributed data. Some do support distributed control of XML data, but use protocols that have limitations and offer low concurrency levels. In order to provide effective data management in distributed environments, we present DTX, a distributed concurrency control mechanism for XML data that takes its structural characteristics into account. DTX aims to provide effective management of XML data and to preserve properties such as isolation and consistency in transactions, using a multi-granular concurrency control protocol that increases parallelism among transactions and an optimised structure for data representation. DTX has a modular and flexible architecture, allowing easy integration with any XML storage mechanism; moreover, it can be extended with new features. To evaluate DTX, several experiments were conducted comparing DTX as proposed with a variation of DTX that uses a coarser-grained protocol, in an attempt to simulate the strategies of related work. The results confirm DTX's effectiveness considering different aspects of distributed transactions on XML data, improving their performance, i.e., transaction execution time.
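The multi-granular idea (sketched here with the classic IS/IX/S/X scheme and an assumed lock table; DTX's actual protocol is richer) is that a transaction takes intention locks on the ancestors of an XML node and a real lock on the node itself, so transactions touching disjoint subtrees never block each other:

```java
import java.util.*;

// Hedged sketch of multi-granularity locking on an XML tree. Intention locks
// on ancestors let transactions working in disjoint subtrees run in parallel.
public class MultiGranularLock {
    enum Mode { IS, IX, S, X }

    // Standard compatibility matrix: rows = held mode, cols = requested mode.
    static final boolean[][] COMPAT = {
        /* held IS */ { true,  true,  true,  false },
        /* held IX */ { true,  true,  false, false },
        /* held S  */ { true,  false, true,  false },
        /* held X  */ { false, false, false, false },
    };

    static final Map<String, List<Mode>> held = new HashMap<>();

    // path = ancestors from the root down to the target node, e.g.
    // ["/catalog", "/catalog/book[2]", "/catalog/book[2]/price"]
    static synchronized boolean lock(List<String> path, Mode leafMode) {
        Mode intent = (leafMode == Mode.S) ? Mode.IS : Mode.IX;
        for (int i = 0; i < path.size(); i++) {
            Mode want = (i == path.size() - 1) ? leafMode : intent;
            for (Mode h : held.getOrDefault(path.get(i), List.of()))
                if (!COMPAT[h.ordinal()][want.ordinal()]) return false;   // conflict: caller must wait
        }
        for (int i = 0; i < path.size(); i++)
            held.computeIfAbsent(path.get(i), k -> new ArrayList<>())
                .add(i == path.size() - 1 ? leafMode : intent);
        return true;
    }
}
```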
109. Verification of a Concurrent Garbage Collector (Zakowski, Yannick, 19 December 2017)
Modern compilers are complex programs, performing several heuristic-based optimisations. As such, and despite extensive testing, they may contain bugs leading to the introduction of new behaviours in the compiled program. To address this issue, we are nowadays able to prove correct, in proof assistants such as Coq, optimising compilers for languages such as C or ML. To date, a similar result for high-level languages such as Java nonetheless remains out of reach. Such languages indeed possess two essential characteristics: concurrency and a particularly complex runtime. This thesis aims at reducing the gap toward the implementation of such a verified compiler. To do so, we focus more specifically on a state-of-the-art concurrent garbage collector. This component of the runtime takes care of automatically reclaiming memory during the execution, removing this burden from the developer.

In order to keep the induced overhead as low as possible, the garbage collector needs to be extremely efficient. More specifically, the algorithm considered is said to be "on the fly": by relying on fine-grained concurrency, the user threads are never caused to actively wait. The key property we establish is the functional correctness of this garbage collector, i.e. that a cell that a user thread may still access is never reclaimed. We present in a first phase the algorithm considered and its formalisation in Coq by implementing it in a dedicated intermediate representation. We introduce the proof system we used to conduct the proof, a variant based on the well-established Rely-Guarantee logic, and prove the algorithm correct. Reasoning simultaneously over both the garbage collection algorithm itself and the implementation of the concurrent data structures it uses would entail undesired additional complexity. The proof is therefore conducted with respect to abstract operations: they take place instantaneously. To justify this simplification, we introduce in a second phase a methodology inspired by the work of Vafeiadis and dedicated to the proof of observational refinement for so-called "linearisable" concurrent data structures. We provide the approach with solid semantic foundations, formalised in Coq. This methodology is instantiated to soundly implement the main data structure used in our garbage collector.
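As a rough illustration of the linearisability notion this refinement method relies on (our example, not one from the thesis): a concurrent operation may span many steps, yet appears to take effect instantaneously at a single linearisation point, which is what licenses reasoning about it as an abstract, atomic operation.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Our illustration (not from the thesis) of a linearisation point. The whole
// increment spans several steps, but it "takes effect" at the one successful
// compareAndSet, so callers can reason about it as if it were instantaneous,
// the abstraction used in the proof.
public class LinearisableCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        while (true) {
            int seen = value.get();                    // step 1: read
            int next = seen + 1;                       // step 2: local compute
            if (value.compareAndSet(seen, next)) {     // linearisation point
                return next;                           // effect is visible atomically
            }
            // CAS failed: another thread linearised first; retry.
        }
    }
}
```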
110. Dynamic Analysis of Multithreaded Embedded Software to Expose Atomicity Violations (January 2016)
Concurrency bugs are among the most notorious software bugs and are very difficult to manifest. Significant work has been done on detecting atomicity violation bugs in high-performance systems, but little work addresses these bugs in embedded systems. Although the criteria for claiming the existence of a bug remain the same, the approach changes somewhat for embedded systems. The main focus of this research is to develop a systematic methodology that addresses the issue from an embedded-systems perspective. A framework is developed that predicts the access-interleaving patterns that may violate atomicity, using the memory references of shared variables, and provides support for forcing and analysing these schedules for any output change, system fault, or change in execution path.
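A minimal example of the kind of interleaving pattern such a framework would flag (our illustration; the thesis's pattern taxonomy and schedule-forcing machinery are not shown): two accesses by one thread that the programmer assumed atomic, split by a remote write.

```java
// Our minimal illustration of an atomicity violation: thread A's read and
// write of `balance` are assumed atomic but are not, so thread B's write can
// interleave between them and be silently lost.
public class AtomicityViolation {
    static int balance = 100;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            int seen = balance;        // A: read (assumed atomic with the write below)
            // <-- B's write may land here: the interleaving over the shared
            //     memory references that a dynamic analysis would predict and force
            balance = seen - 40;       // A: write based on the stale read
        });
        Thread b = new Thread(() -> balance = balance + 10);  // B: remote write
        a.start(); b.start();
        a.join(); b.join();
        // Any serial order yields 70; the bad interleaving yields 60,
        // dropping B's deposit.
        System.out.println("balance = " + balance);
    }
}
```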