21

Deadlock Free Routing in Mesh Networks on Chip with Regions

Holsmark, Rickard January 2009 (has links)
There is a seemingly endless miniaturization of electronic components, which has enabled designers to build sophisticated computing structures on silicon chips. Consequently, electronic systems are continuously improving with new and more advanced functionalities. Design complexity of these Systems on Chip (SoC) is reduced by the use of pre-designed cores. However, several problems related to the interconnection of cores remain. Network on Chip (NoC) is a new SoC design paradigm, which targets the interconnect problems using classical network concepts. Still, SoC cores show large variance in size and functionality, whereas several NoC benefits relate to regularity and homogeneity. This thesis studies some network aspects which are characteristic of NoC systems. One is the issue of area wastage in NoC due to cores of various sizes. We elaborate on using oversized regions in regular mesh NoC and identify several new design possibilities. Adverse effects of regions on communication are outlined and evaluated by simulation. Deadlock freedom is an important region issue, since it affects both the usability and performance of routing algorithms. The concept of faulty blocks, used in deadlock-free fault-tolerant routing algorithms, has similarities with rectangular regions. We have improved and adapted one such algorithm to provide deadlock-free routing in NoC with regions. This work also offers a methodology for designing topology-agnostic, deadlock-free, highly adaptive application-specific routing algorithms. The methodology exploits information about communication among tasks of an application. This is used in the analysis of deadlock freedom, such that fewer deadlock-preventing routing restrictions are required. A comparative study of the two proposed routing algorithms shows that the application-specific algorithm gives significantly higher performance. But the fault-tolerant algorithm may be preferred for systems requiring support for general communication. Several extensions to our work are proposed, for example in areas such as core mapping and efficient routing algorithms. The region concept can be extended for supporting reuse of a pre-designed NoC as a component in a larger hierarchical NoC.
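
To make the routing setting concrete, the sketch below shows dimension-order (XY) routing on a mesh where a rectangular region is treated like a faulty block that packets must skirt. It is only an illustrative toy under assumed names and a simplistic detour rule, not the thesis's adapted fault-tolerant algorithm; in particular, a real region-aware router needs carefully chosen turn restrictions to remain deadlock free.

```python
# Illustrative toy, not the thesis's algorithm: XY (dimension-order) routing on
# a mesh NoC where a rectangular region blocks some cells. Plain XY routing is
# deadlock-free on a mesh; the naive Y-sidestep below is only a sketch.

def inside(region, node):
    (x0, y0), (x1, y1) = region
    x, y = node
    return x0 <= x <= x1 and y0 <= y <= y1

def next_hop(cur, dst, region):
    """One routing step: X first, then Y, sidestepping the region in Y."""
    x, y = cur
    dx, dy = dst
    if x != dx:
        step = (x + (1 if dx > x else -1), y)
        if not inside(region, step):
            return step
        return (x, y - 1) if y > 0 else (x, y + 1)   # detour around the region
    if y != dy:
        return (x, y + (1 if dy > y else -1))
    return cur                                       # arrived

# A 6x6 mesh whose cells (2,2)-(3,3) are covered by an oversized core (region).
region = ((2, 2), (3, 3))
node, dst, path = (0, 3), (5, 3), []
while node != dst:
    node = next_hop(node, dst, region)
    path.append(node)
print(path)   # the route dips below the region, then re-converges in Y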
22

Safe Concurrent Programming and Execution

Pyla, Hari Krishna 05 March 2013 (has links)
The increasing prevalence of multi- and many-core processors has brought the issues of concurrency and parallelism to the forefront of everyday computing. Even for applications amenable to traditional parallelization techniques, the subtleties of concurrent programming are known to introduce concurrency bugs. Due to the potential for concurrency bugs, programmers find it hard to write correct concurrent code. To take full advantage of parallel shared memory platforms, application programmers need safe and efficient mechanisms that can support a wide range of parallel applications. In addition, a large body of applications are inherently hard to parallelize; their data and control dependencies impose execution order constraints that preclude the use of traditional parallelization techniques. Being sensitive to their input data, a substantial number of applications fail to scale well, leaving cores idle. To improve the performance of such applications, application programmers need effective mechanisms that can fully leverage multi- and many-core architectures. These challenges stand in the way of realizing the true potential of emerging many-core platforms. The techniques described in this dissertation address these challenges. Specifically, this dissertation contributes techniques to transparently detect and eliminate several concurrency bugs, including deadlocks, asymmetric write-write data races, priority inversion, livelocks, order violations, and bugs that stem from the presence of asynchronous signaling and locks. A second major contribution of this dissertation is a programming framework that exploits coarse-grain speculative parallelism to improve the performance of otherwise hard-to-parallelize applications. / Ph. D.
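
As a minimal sketch of one mechanism this abstract alludes to (transparent deadlock detection), the following toy tracks which thread holds each lock and which lock each thread is waiting on, and checks whether granting a new request would close a cycle in the wait-for graph. The representation (plain dictionaries) and names are assumptions for illustration, not the dissertation's runtime system.

```python
def would_deadlock(holds, waits, tid, lock_id):
    """holds: lock -> owning thread; waits: thread -> lock it is blocked on.
    Return True if letting `tid` block on `lock_id` closes a wait-for cycle."""
    seen = set()
    cur = lock_id
    while cur in holds:              # someone owns the lock we want
        owner = holds[cur]
        if owner == tid:
            return True              # chain leads back to the requester: cycle
        if owner in seen:
            return False             # a cycle not involving `tid`
        seen.add(owner)
        cur = waits.get(owner)       # follow what that owner itself waits on
        if cur is None:
            return False             # owner is running, chain ends safely
    return False

# T1 holds A and waits for B; T2 holds B and now requests A -> deadlock.
holds = {"A": "T1", "B": "T2"}
waits = {"T1": "B"}
print(would_deadlock(holds, waits, "T2", "A"))   # True
```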
23

A Framework for Applying Reinforcement Learning to Deadlock Handling in Intralogistics

Müller, Marcel, Reyes-Rubiano, Lorena Silvana, Reggelin, Tobias, Zadek, Hartmut 14 June 2023 (has links)
Intralogistics systems, while complex, are crucial for a range of industries. One of their challenges is deadlock situations that can disrupt operations and decrease efficiency. This paper presents a four-stage framework for applying reinforcement learning algorithms to manage deadlocks in such systems. The stages include Problem Formulation, Model Selection, Algorithm Selection, and System Deployment. We carefully identify the problem, select an appropriate model to represent the system, choose a suitable reinforcement learning algorithm, and finally deploy the solution. Our approach provides a structured method to tackle deadlocks, improving system resilience and responsiveness. This comprehensive guide can serve researchers and practitioners alike, offering a new avenue for enhancing intralogistics performance. Future research can explore the framework’s effectiveness and applicability across different systems.
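
The framework is conceptual rather than code, but a toy instantiation may help fix ideas. The sketch below walks the four stages on an invented micro-problem: the state is the number of vehicles in a shared zone, entering a full zone means gridlock, and tabular Q-learning learns to hold vehicles back. Everything concrete here (the model, rewards, and algorithm choice) is an assumption for illustration, not the paper's prescription.

```python
import random

# Stage 1, Problem Formulation: state = vehicles in a shared zone (3 = gridlock);
# actions: 1 = release a waiting vehicle into the zone, 0 = hold back (one leaves).
def step(s, a):
    s2 = s + 1 if a else max(0, s - 1)
    if s2 == 3:
        return 0, -10.0                 # deadlock: heavy penalty, zone is reset
    return s2, (1.0 if a else -0.1)     # releasing earns a throughput reward

# Stage 2, Model Selection: a tabular value model over the tiny state space.
q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}

# Stage 3, Algorithm Selection: epsilon-greedy tabular Q-learning.
alpha, gamma, eps, s = 0.1, 0.9, 0.2, 0
for _ in range(5000):
    a = (random.choice((0, 1)) if random.random() < eps
         else max((0, 1), key=lambda a: q[(s, a)]))
    s2, r = step(s, a)
    q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
    s = s2

# Stage 4, System Deployment: the learned greedy policy should hold at s = 2.
print({s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)})
```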
24

Deadlock detection and avoidance for a class of manufacturing systems

Faiz, Tariq Nadeem January 1996 (has links)
No description available.
25

Stochastic Scheduling for a Network of MEMS Job Shops

Varadarajan, Amrusha 31 January 2007 (has links)
This work is motivated by the pressing need for operational control in the fabrication of Microelectromechanical systems or MEMS. MEMS are miniature three-dimensional integrated electromechanical systems with the ability to absorb information from the environment, process this information and suitably react to it. These devices offer tremendous advantages owing to their small size, low power consumption, low mass and high functionality, which makes them very attractive in applications with stringent demands on weight, functionality and cost. While the system's "brain" (device electronics) is fabricated using traditional IC technology, the micromechanical components necessitate very intricate and sophisticated processing of silicon or other suitable substrates. A dearth of fabrication facilities with micromachining capabilities and a lengthy gestation period from design to mass fabrication and commercial acceptance of the product in the market are factors most often implicated in hampering the growth of MEMS. These devices are highly application specific with low production volumes and the few fabs that do possess micromachining capabilities are unable to offer a complete array of fabrication processes in order to be able to cater to the needs of the MEMS R&D community. A distributed fabrication network has, therefore, emerged to serve the evolving needs of this high investment, low volume MEMS industry. Under this environment, a central facility coordinates between a network of fabrication centers (Network of MEMS job shops -- NMJS) containing micromachining capabilities. These fabrication centers include commercial, academic and government fabs, which make their services available to the ordinary customer. Wafers are shipped from one facility to another until all processing requirements are met. The lengthy and intricate process sequences that need to be performed over a network of capital intensive facilities are complicated by dynamic job arrivals, stochastic processing times, sequence-dependent set ups and travel between fabs. Unless the production of these novel devices is carefully optimized, the benefits of distributed fabrication could be completely overshadowed by lengthy lead times, chaotic routings and costly processing. Our goal, therefore, is to develop and validate an approach for optimal routing (assignment) and sequencing of MEMS devices in a network of stochastic job shops with the objective of minimizing the sum of completion times and the cost incurred, given a set of fabs, machines and an expected product mix. In view of our goal, we begin by modeling the stochastic NMJS problem as a two-stage stochastic program with recourse where the first-stage variables are binary and the second-stage variables are continuous. The key decision variables are binary and pertain to the assignment of jobs to machines and their sequencing for processing on the machines. The assignment variables essentially fix the route of a job as it travels through the network because these variables specify the machine on which each job-operation must be performed out of several candidate machines. Once the assignment is decided upon, sequencing of job-operations on each machine follows. The assignment and sequencing must be such that they offer the best solution (in terms of the objective) possible in light of all the processing time scenarios that can be realized. We present two approaches for solving the stochastic NMJS problem.
The first approach is based on the L-shaped method (credited to van Slyke and Wets, 1969). Since the NMJS problem lacks relatively complete recourse, the first-stage solution can be infeasible to the second-stage problem in that the first-stage solution may either violate the reentrant flow conditions or it may create a deadlock. In order to alleviate these infeasibilities, we develop feasibility cuts which, when appended to the master problem, eliminate the infeasible solution. Alternatively, we also develop constraints to explicitly address these infeasibilities directly within the master problem. We show how a deadlock involving 2 or 3 machines arises if and only if a certain relationship between operations and a certain sequence amongst them exists. We generalize this argument to the case of m machines, which forms the basis for our deadlock prevention constraints. Computational results at the end of Chapter 3 compare the relative merits of a model which relies solely on feasibility cuts with models that incorporate reentrant flow and deadlock prevention constraints within the master problem. Experimental evidence reveals that the latter offers appreciable time savings over the former. Moreover, in a majority of instances we see that models that carry deadlock prevention constraints in addition to the reentrant flow constraints provide at par or better performance than those that solely carry reentrant flow constraints. We, next, develop an optimality cut which, when appended to the master problem, helps in eliminating the suboptimal master solution. We also present alternative optimality and feasibility cuts obtained by modifying the disjunctive constraints in the subproblem so as to eliminate the big H terms in it. Although any large positive number can be used as the value of H, a conservative estimate may improve computational performance. In light of this, we develop a conservative upper bound for operation completion times and use it as the value of H. Test instances have been generated using a problem generator written in Java. We present computational results to evaluate the impact of a conservative estimate for big H on run time, analyze the effect of the different optimality cuts and demonstrate the performance of the multicut method (Wets, 1981), which differs from the L-shaped method in that the number of optimality cuts it appends is equal to the number of scenarios in each iteration. Experimentation indicates that Model 2, which uses the standard optimality cut in conjunction with the conservative estimate for big H, almost always outperforms Model 1, which also uses the standard optimality cut but uses a fixed value of 1000 for big H. Model 3, which employs the alternative optimality cut with the conservative estimate for big H, requires the fewest number of iterations to converge to the optimum but it also incurs the maximum premium in terms of computational time. This is because the alternative optimality cut adds to the complexity of the problem in that it appends additional variables and constraints to the master as well as the subproblems. In the case of Model 4 (multicut method), the segregated optimality cuts accurately reflect the shape of the recourse function resulting in fewer overall iterations, but the large number of these cuts accumulates over the iterations making the master problem sluggish, and so this model exhibits a variable performance for the various datasets.
These experiments reveal that a compact master problem and a conservative estimate for big H positively impact the run time performance of a model. Finally, we develop a framework for a branch-and-bound scheme within which the L-shaped method, as applied to the NMJS problem, can be incorporated so as to further enhance its performance. Our second approach for solving the stochastic NMJS problem relies on the tight LP relaxation observed for the deterministic equivalent of the model. We, first, solve the LP relaxation of the deterministic equivalent problem, and then, fix certain binary assignment variables that take on a value of either a 0 or a 1 in the relaxation. Based on this fixing of certain assignment variables, additional logical constraints have been developed that lead to the fixing of some of the sequencing variables too. Experimental results, comparing the performance of the above LP heuristic procedure with CPLEX over the generated test instances, illustrate the effectiveness of the heuristic procedure. For the largest problems (5 jobs, 10 operations/job, 12 machines, 7 workcenters, 7 scenarios) solved in this experiment, an average savings of as much as 4154 seconds and 1188 seconds was recorded in a comparison with Models 1 and 2, respectively. Both of these models solve the deterministic equivalent of the stochastic NMJS problem but differ in that Model 1 uses a big H value of 1000 whereas Model 2 uses the conservative upper bound for big H developed in this work. The maximum optimality gap observed for the LP heuristic over all the data instances solved was 1.35%. The LP heuristic, therefore, offers a powerful alternative to solving these problems to near-optimality with a very low computational burden. We also present results pertaining to the value of the stochastic solution for various data instances. The observed savings of up to 8.8% over the mean value approach underscores the importance of using a solution that is robust over all scenarios versus a solution that approximates the randomness through expected values. We, next, present a dynamic stochastic scheduling approach (DSSP) for the NMJS problem. The premise behind this undertaking is that in a real-life implementation that is faithful to the two-stage procedure, assignment (routing) and sequencing decisions will be made for all the operations of all the jobs at the outset and these will be followed through regardless of the actual processing times realized for individual operations. However, it may be possible to refine this procedure if information on actual processing time realizations for completed operations could be utilized so that assignment and sequencing decisions for impending operations are adjusted based on the evolving scenario (which may be very different from the scenarios modeled) while still hedging against future uncertainty. In the DSSP approach, the stochastic programming model for the NMJS problem is solved at each decision point using the LP heuristic in a rolling horizon fashion while incorporating constraints that model existing conditions in the shop floor and the actual processing times realized for the operations that have been completed. The implementation of the DSSP algorithm is illustrated through an example problem. The results of the DSSP approach as applied to two large problem instances are presented. 
The performance of the DSSP approach is evaluated on three fronts: first, by using the LP heuristic at each decision point; second, by using an optimal algorithm at each decision point; and third, against the two-stage stochastic programming approach. Results from the experimentation indicate that the DSSP approach using the LP heuristic at each decision point generates better assignment and sequencing decisions than the two-stage stochastic programming approach and provides solutions that are near-optimal with a very low computational burden. For the first instance involving 40 operations, 12 machines and 3 processing time scenarios, the DSSP approach using the LP heuristic yields the same solution as the optimal algorithm with a total time savings of 71.4% and also improves upon the two-stage stochastic programming solution by 1.7%. In the second instance, the DSSP approach using the LP heuristic yields a solution with an optimality gap of 1.77% and a total time savings of 98% over the optimal algorithm. In this case, the DSSP approach with the LP heuristic improves upon the two-stage stochastic programming solution by 6.38%. We conclude by presenting a framework for the DSSP approach that extends the basic DSSP algorithm to accommodate jobs whose arrival times may not be known in advance. / Ph. D.
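
The L-shaped loop described above alternates between a master problem and scenario subproblems, appending cuts until the recourse estimate is tight. The sketch below runs that loop on a deliberately tiny invented instance (one job, two candidate machines, three processing-time scenarios) with the master solved by enumeration; it illustrates only the cut-generation mechanics, not the thesis's NMJS model, feasibility cuts, or big-H bounds.

```python
scenarios = [               # (probability, (time on machine 0, time on machine 1))
    (0.5, (4.0, 7.0)),
    (0.3, (9.0, 5.0)),
    (0.2, (6.0, 6.0)),
]
setup = (1.0, 2.0)          # first-stage assignment cost per machine

def expected_recourse(x):   # exact E[Q(x)]: mean processing time on machine x
    return sum(p * t[x] for p, t in scenarios)

cuts = []                   # optimality cuts: functions x -> lower bound on E[Q(x)]

def solve_master():
    best = None
    for x in (0, 1):        # first stage is binary, so simply enumerate it
        theta = max((c(x) for c in cuts), default=0.0)
        if best is None or setup[x] + theta < best[0]:
            best = (setup[x] + theta, x, theta)
    return best

while True:
    bound, x, theta = solve_master()
    q = expected_recourse(x)          # "solve" the scenario subproblems
    if q <= theta + 1e-9:             # recourse estimate already tight: optimal
        print(f"assign to machine {x}, total cost {setup[x] + q:.2f}")
        break
    # append a cut that is tight at the current x (vacuous elsewhere, which
    # suffices when the first stage can be enumerated)
    cuts.append(lambda z, xk=x, qk=q: qk if z == xk else 0.0)
```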
26

Shared corporate control and deadlock resolution: a case study of the Companhia Brasileira de Distribuição

Neves, Lara Britto de Almeida Domingues 25 August 2016 (has links)
This master's thesis investigates corporate shared control, specifically potential deadlocks in corporate decisions that might occur due to the corporation's share distribution structure, its organizational structure, or contractual arrangements undertaken by the parties. The approach to the subject comes from the study of a real case (the case of the Companhia Brasileira de Distribuição). The research starts with the systematization of case data and a critical assessment of how control of the Company, shared between the Abilio Diniz group and the Casino group, was structured in 2005, from the point of view of form and content. The study aims to extract lessons, with generalization potential, on mechanisms for breaking deadlocks where a business dispute has arisen because the required consensus in decision making could not be reached. Part I is descriptive, and aims to make the reader aware of the context of the negotiation and of the proposal of shared control by the major shareholders, which was concentrated in two agreements: a 'joint venture agreement' and a 'shareholders' agreement'. Part II has an analytical purpose and seeks to compare the actual data of the case vis-à-vis theoretical positions found in the legal literature.
27

Targeted Client Synthesis for Detecting Concurrency Bugs

Samak, Malavika January 2016 (has links) (PDF)
Detecting concurrency bugs can be challenging due to the intricacies associated with their manifestation. These intricacies correspond to identifying the methods that need to be invoked concurrently, the inputs passed to these methods and the interleaving of the threads that cause the erroneous behavior. Neither fuzzing-based testing techniques nor over-approximate static analyses are well positioned to detect subtle concurrency defects while retaining high accuracy alongside satisfactory coverage. While dynamic analysis techniques have been proposed to overcome some of the challenges in detecting concurrency bugs, we observe that their success is critically dependent on the availability of effective multithreaded clients. Without a priori knowledge of the defects, manually constructing defect-revealing multithreaded clients is non-trivial. In this thesis, we design an approach to address the problem of automatically generating clients for detecting concurrency bugs in multithreaded libraries. The key insight underlying our design is that a subset of the properties observed when the defects manifest in a concurrent execution can also be observed in a sequential execution. The input to our approach is a library implementation and a sequential testsuite, and the output is a set of multithreaded clients that can be used to reveal defects in the input library implementation. Dynamic defect detectors can execute the clients and analyze the resulting traces to report various kinds of defects including deadlocks, data races and atomicity violations. Furthermore, the clients can also be used by testing frameworks to report assertion violations. We propose two variants of our design – (a) path-agnostic client generation, and (b) path-aware client generation. The path-agnostic client generation process helps in the detection of potential bugs present in the paths executed by the input sequential testsuite. It does not attempt to explore newer paths by satisfying path conditions, either by modifying the input or by scheduling the threads appropriately. The generated clients are used to expose deadlocks, data races and atomicity violations. Our analysis examines the execution traces obtained from executing the input sequential clients and produces a concurrent client program that drives shared objects via library method calls to states conducive to triggering deadlocks, data races or atomicity violations. For path-aware client generation, our approach explores newer paths that are not covered by the input sequential testsuite to generate clients. For this purpose, we design a directed, iterative and scalable engine that combines the strengths of static and dynamic analysis to help synthesize both multithreaded clients and schedules that violate complex correctness conditions expressed by the developer. Apart from the library implementation and the sequential testsuite as input, this engine also accepts a specification of correctness as input. Then, it iteratively refines each client from the input sequential testsuite to generate an execution that can break the input specification. Each step of the iterative process includes statically identifying sub-goals towards the goal of failing the specification, generating a plan toward meeting these goals, and merging of the paths traversed dynamically with the plan computed statically via constraint solving to generate a new client. The engine reports full reproduction scenarios, guaranteed to be true, for the bugs it finds.
We have implemented prototypes that incorporate the aforementioned ideas and validated them by applying them on 29 well-tested concurrent classes from popular Java libraries, including the latest version of JDK. We are able to automatically generate clients that helped expose more than 300 concurrency bugs including deadlocks, data races, atomicity violations and assertion violations. We reported many previously unknown bugs to the developers of these libraries resulting in either fixes to the code or changes to the documentation pertaining to the thread-safe behavior of the relevant classes. On average, the time taken to analyze a class and generate clients for it is less than two minutes. We believe that the demonstrated effectiveness of our prototypes in helping expose deep bugs in popular Java libraries makes the design, proposed in this thesis, a vital cog in the future development and deployment of dynamic concurrency bug detectors.
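
For a flavor of what a synthesized client looks like, the sketch below replays a sequential prefix to drive a shared object into a suspicious state, then invokes two implicated methods from concurrent threads behind a barrier, which is the general shape of client such tools emit. The Counter class and the chosen methods are invented for illustration; the real prototypes derive them from the library and its sequential testsuite.

```python
import threading

class Counter:                       # stand-in for a library class under test
    def __init__(self):
        self.n = 0
    def incr(self):
        t = self.n                   # read ...
        self.n = t + 1               # ... and write, not atomic together
    def reset_if_big(self, limit):
        if self.n >= limit:
            self.n = 0

def generated_client():
    obj = Counter()
    for _ in range(5):               # prefix replayed from a sequential test
        obj.incr()
    barrier = threading.Barrier(2)   # maximize the chance of interleaving
    def t1():
        barrier.wait(); obj.incr()
    def t2():
        barrier.wait(); obj.reset_if_big(5)
    a, b = threading.Thread(target=t1), threading.Thread(target=t2)
    a.start(); b.start(); a.join(); b.join()
    # a dynamic detector (or an assertion) then inspects the final state
    assert obj.n in (0, 1, 6), "unexpected interleaving exposed"

generated_client()
```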
28

Supervisor Synthesis for Automated Manufacturing Systems Based on Structure Theory of Petri Nets

Liu, Gaiyun 27 December 2014 (has links)
Because of automation and the reduction of the number of operators, the control of industrial systems is becoming a critical issue. For automated manufacturing systems (AMS), where resource sharing is preponderant, partial or total blocking is frequent, and validation before implementation is preferable to reduce the risks. Because Petri nets describe the concurrent execution of processes and the sharing of resources easily and concisely, many methods to verify deadlock-freeness and to synthesize controllers using structural theory or the reachability graph have been proposed over the past two decades. Traditionally, a deadlock control policy is evaluated by three performance criteria: structural complexity, behavioral permissiveness, and computational complexity. Generally, deadlock control policies based on state space analysis can approach the maximally permissive behavior, but suffer from the state explosion problem. On the contrary, deadlock control policies based on the structural analysis of Petri nets generally avoid the state explosion problem successfully, but cannot lead to a maximally or near-maximally permissive controller. Moreover, the current siphon-based deadlock control theory is fairly mature for ordinary Petri nets, while for generalized Petri nets it is presently at an early stage. On the other hand, most Petri net based deadlock control policies for AMS proceed on the premise that the resources in the system under consideration are reliable. Actually, resource failures are inevitable and common in most AMS, and they may also cause processes to halt. Therefore, it is judicious to develop an effective and robust deadlock control policy that considers unreliable resources. This thesis aims to cope with the limitations mentioned above. Our main theoretical and algorithmic contributions are the following. Firstly, after revisiting the controllability conditions of siphons and the limitations of the max and max' controlled-siphon properties, we define the max'' cs-property and prove that this new cs-property is not only a sufficient but also a necessary liveness condition for generalized systems of simple sequential processes with resources (GS3PR). Moreover, we show how the checking of this property, and hence of the liveness of GS3PR nets, can be translated into the resolution of an integer programming (IP) model. Secondly, we propose a class of manufacturing-oriented Petri nets, M-nets for short, with strong modeling capability. Combining siphon control and the theory of regions, we develop a deadlock prevention method that makes a good trade-off between behavioral optimality and computational tractability. Moreover, this thesis proposes a maximally permissive control policy for a subclass of Petri nets (called Béta-nets) based on the token distribution pattern of unmarked siphons. Finally, we propose a design method for robust liveness-enforcing supervisors for AMS with unreliable resources, appropriate in particular for systems of simple sequential processes with resources (S3PR).
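
The siphon notion underlying this control theory is easy to state executably for ordinary nets: a place set S is a siphon if every transition producing into S also consumes from S, so once S empties it stays empty, and siphon-based supervisors keep such sets marked. Below is a minimal checker over an assumed dictionary encoding of a net; it is illustrative only and does not implement the thesis's max''-controllability conditions.

```python
# Net as two relations: pre[t] = input places of t, post[t] = output places.
# Two cyclic processes (p1-p2 and p3-p4) competing for a shared resource r.
pre  = {"t1": {"p1", "r"}, "t2": {"p2"}, "t3": {"p3", "r"}, "t4": {"p4"}}
post = {"t1": {"p2"}, "t2": {"p1", "r"}, "t3": {"p4"}, "t4": {"p3", "r"}}

def is_siphon(S):
    # every transition that produces into S must also consume from S
    return all(pre[t] & S for t in pre if post[t] & S)

def is_marked(S, marking):
    # siphon control keeps this True in every reachable marking
    return any(marking.get(p, 0) > 0 for p in S)

S = {"r", "p2", "p4"}
marking = {"p1": 1, "p3": 1, "r": 1}
print(is_siphon(S), is_marked(S, marking))   # True True
```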
29

Verification of sequential and concurrent libraries

Deshmukh, Jyotirmoy Vinay 02 August 2011 (has links)
The goal of this dissertation is to present new and improved techniques for fully automatic verification of sequential and concurrent software libraries. In most cases, automatic software verification is plagued by undecidability, while in many others it suffers from prohibitively high computational complexity. Model checking -- a highly successful technique used for verifying finite state hardware circuits against logical specifications -- has been less widely adapted for software, as software verification tends to involve reasoning about potentially infinite state-spaces. Two of the biggest culprits responsible for making software model checking hard are heap-allocated data structures and concurrency. In the first part of this dissertation, we study the problem of verifying shape properties of sequential data structure libraries. Such libraries are implemented as collections of methods that manipulate the underlying data structure. Examples of such methods include: methods to insert, delete, and update data values of nodes in linked lists, binary trees, and directed acyclic graphs; methods to reverse linked lists; and methods to rotate balanced trees. Well-written methods are accompanied by documentation that specifies the observational behavior of these methods in terms of pre/post-conditions. A pre-condition φ for a method M characterizes the state of a data structure before the method acts on it, and the post-condition ψ characterizes the state of the data structure after the method has terminated. In a certain sense, we can view the method as a function that operates on an input data structure, producing an output data structure. Examples of such pre/post-conditions include shape properties such as acyclicity, sorted-ness, tree-ness, reachability of particular data values, and reachability of pointer values, and data structure-specific properties such as: "no red node has a red child", and "there is no node with data value 'a' in the data structure". Moreover, methods are often expected not to violate certain safety properties such as the absence of dangling pointers, absence of null pointer dereferences, and absence of memory leaks. We often assume such specifications as implicit, and say that a method is incorrect if it violates such specifications. We model data structures as directed graphs, and use the two terms interchangeably. Verifying correctness of methods operating on graphs is an instance of the parameterized verification problem: for every input graph that satisfies φ, we wish to ensure that the corresponding output graph satisfies ψ. Control structures such as loops and recursion allow an arbitrary method to simulate a Turing Machine. Hence, the parameterized verification problem for arbitrary methods is undecidable. One of the main contributions of this dissertation is in identifying mathematical conditions on a programming language fragment for which parameterized verification is not only decidable, but also efficient from a complexity perspective. The decidable fragment we consider can be broadly sub-divided into two categories: the class of iterative methods, or methods which use loops as a control flow construct to traverse a data structure, and the class of recursive methods, or methods that use recursion to traverse the data structure.
We show that for an iterative method operating on a directed graph, if we are guaranteed that the number of destructive updates the method performs is bounded (by a constant, i.e., O(1)) and that the method terminates, then the correctness of the method can be checked in time polynomial in the size of the method and its specifications. Further, we provide a well-defined syntactic fragment for recursive methods operating on tree-like data structures, which assures that any method in this fragment can be verified in time polynomial in the size of the method and its specifications. Our approach draws on the theory of tree automata, and we show that parameterized correctness can be reduced to emptiness of finite-state, nondeterministic tree automata that operate on infinite trees. We then leverage efficient algorithms for checking the emptiness of such tree automata to obtain a tractable verification framework. Our prototype tool demonstrates the low theoretical complexity of our technique by efficiently verifying common methods that operate on data structures. In the second part of the dissertation, we tackle another obstacle for tractable software verification: concurrency. In particular, we explore application of a static analysis technique based on interprocedural dataflow analysis to predict and document deadlocks in concurrent libraries, and analyze deadlocks in clients that use such libraries. The kinds of deadlocks that we focus on result from circular dependencies in the acquisition of shared resources (such as locks). Well-written applications that use several locks implicitly assume a certain partial order in which locks are acquired by threads. A cycle in the lock acquisition order is an indicator of a possible deadlock within the application. Methods in object-oriented concurrent libraries often encapsulate internal synchronization details. As a result of information hiding, clients calling the library methods may cause thread safety violations by invoking methods in a manner that violates the partial ordering between lock acquisitions that is implicit within the library. Given a concurrent library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments that guarantee deadlock-free execution for the methods in the library. The contracts also help client developers by documenting required assumptions about the library methods. Alternatively, the contracts can be statically enforced in the client code to detect potential deadlocks in the client. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a satisfiability modulo theories (SMT) solver. Additionally, we investigate extensions of our technique to reason about deadlocks in libraries that employ signalling primitives such as wait-notify for cooperative synchronization. We demonstrate its scalability and efficiency with a prototype tool that analyzed over a million lines of code for some widely-used open-source Java libraries in less than 50 minutes. Furthermore, the contracts inferred by our approach have been able to pinpoint real bugs, i.e., deadlocks that have been reported by users of these libraries.
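
The lock-order idea in the second part can be sketched compactly: record an edge a -> b whenever some thread acquires lock b while holding lock a, and flag cycles in the resulting graph as potential deadlocks (the acyclic edge set doubles as a contract of the form "acquire a before b"). The trace representation below is an assumption for illustration; the dissertation works statically with a symbolic encoding and an SMT solver rather than on concrete traces.

```python
from collections import defaultdict

def lock_order_edges(traces):
    """traces: per-thread lists of lock names in acquisition order, with the
    simplifying assumption that earlier locks stay held (fully nested)."""
    edges = defaultdict(set)
    for held_sequence in traces:
        for i, a in enumerate(held_sequence):
            for b in held_sequence[i + 1:]:
                edges[a].add(b)          # b was acquired while a was held
    return edges

def has_cycle(edges):
    WHITE, GRAY, BLACK = 0, 1, 2         # classic 3-color DFS cycle check
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in edges[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(edges))

# Thread 1 takes A then B; thread 2 takes B then A -> cyclic lock order.
print(has_cycle(lock_order_edges([["A", "B"], ["B", "A"]])))   # True
```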
30

An extension of a tool for the formal support for component-based development

Pereira, Dalay Israel de Almeida 18 August 2017 (has links)
Using the component-based development approach, system complexity is reduced and maintenance is facilitated, bringing more reliability and reuse of components. However, the composition of components (and their interactions) is still a significant source of problems and requires a more detailed analysis. This problem is even more relevant when dealing with safety-critical applications. An approach for specifying this kind of application is the use of Formal Methods, a precise methodology for system specification with a strong mathematical background that brings, among other benefits, more safety. As an example, the formal method CSP allows the specification of concurrent systems and the verification of properties inherent to such systems. CSP has a set of tools for verification, like, for instance, FDR. Using CSP, one can detect and solve problems like deadlock and livelock in a system, although this can be costly in terms of the time spent in verifications. In this context, BRIC has emerged as a CSP-based approach for developing component-based systems, which guarantees deadlock and livelock freedom by construction. This approach uses CSP to specify the constraints and interactions between the components to allow a formal verification of the system. An extension to BRIC, BRICK, makes use of metadata as part of the components in order to decrease the complexity and the quantity of verifications made when composing components. However, the practical use of this approach can be too complex and cumbersome. In order to automate the use of the BRICK approach, a tool has been previously developed (BTS - BRICK Tool Support), which automates the verification of component compositions by automatically generating and checking the side conditions imposed by the approach using FDR. Nevertheless, due to the number and complexity of the verifications made in FDR, the tool can still take too much time in this process. In this dissertation, we present an extension to BTS that improves the way it performs verifications, replacing the FDR used inside the tool with its most recent version and adding an SMT solver that concurrently checks some properties of the specification. We also adapted the tool so it can be used for the specification of a greater number of systems, and we evaluated the extended tool with two case studies, comparing the verifications made in the older version of the tool with this new approach to verification.
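
As a toy illustration of the kind of property BTS delegates to FDR, the sketch below checks deadlock freedom of two synchronizing processes by exhaustively exploring the product of their transition systems and reporting states with no enabled event. The process encoding and the example are invented; neither FDR's nor BTS's actual interfaces are used here.

```python
# Each process maps state -> {event: next_state}; shared events synchronize.
P = {0: {"req": 1}, 1: {"ack": 0}}    # client: req then ack, forever
Q = {0: {"ack": 1}, 1: {"req": 0}}    # server expecting ack FIRST: a bad pairing

def deadlocks(P, Q, shared):
    seen, stack, found = set(), [(0, 0)], []
    while stack:
        p, q = stack.pop()
        if (p, q) in seen:
            continue
        seen.add((p, q))
        succs = []
        for e, p2 in P[p].items():
            if e in shared:
                if e in Q[q]:
                    succs.append((p2, Q[q][e]))   # synchronized step
            else:
                succs.append((p2, q))             # independent step of P
        for e, q2 in Q[q].items():
            if e not in shared:
                succs.append((p, q2))             # independent step of Q
        if not succs:
            found.append((p, q))                  # no event enabled: deadlock
        stack.extend(succs)
    return found

print(deadlocks(P, Q, shared={"req", "ack"}))     # [(0, 0)]
```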
