451. Specifying and Verifying Collaborative Behavior in Component-Based Systems
Yilmaz, Levent, 03 April 2002
In a parameterized collaboration design, one views software as a collection of components that play specific roles in interacting, giving rise to collaborative behavior. From this perspective, collaboration designs revolve around reusing collaborations that typify certain design patterns. Unfortunately, verifying that active, concurrently executing components obey the synchronization and communication requirements needed for the collaboration to work is a serious problem. At least two major complications arise in concurrent settings: (1) it may not be possible to analytically identify components that violate the synchronization constraints required by a collaboration, and (2) evolving participants in a collaboration independently often gives rise to unanticipated synchronization conflicts. This work presents a solution technique that addresses both of these problems. Local (that is, role-to-role) synchronization consistency conditions are formalized and associated decidable inference mechanisms are developed to determine mutual compatibility and safe refinement of synchronization behavior. More specifically, given generic parameterized collaborations and components with specific roles, mutual compatibility analysis verifies that the provided and required synchronization models are consistent and integrate correctly. Safe refinement, on the other hand, guarantees that the local synchronization behavior is maintained consistently as the roles and the collaboration are refined during development. This form of local consistency is necessary, but insufficient to guarantee a consistent collaboration overall. As a result, a new notion of global consistency (that is, among multiple components playing multiple roles) is introduced: causal process constraint analysis. A method for capturing, constraining, and analyzing global causal processes, which arise due to causal interference and interaction of components, is presented. Principally, the method allows one to: (1) represent the intended causal processes in terms of interactions depicted in UML collaboration graphs; (2) formulate constraints on such interactions and their evolution; and (3) check that the causal process constraints are satisfied by the observed behavior of the component(s) at run-time. / Ph. D.
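As a minimal illustration of the run-time flavor of item (3), and not the dissertation's formalism, the sketch below checks whether an observed trace of interaction events respects a set of "must-precede" causal constraints; all event names and constraints are hypothetical.

```python
# Minimal sketch (not the dissertation's formalism): checking that an observed
# run-time trace of component interactions respects causal ordering constraints
# of the form "event A must happen before event B".

from typing import Iterable

def trace_satisfies(trace: Iterable[str], must_precede: list[tuple[str, str]]) -> bool:
    """Return True if every occurrence of b is preceded by some occurrence of a,
    for each (a, b) constraint."""
    seen: set[str] = set()
    for event in trace:
        for a, b in must_precede:
            if event == b and a not in seen:
                return False  # b observed before its causal prerequisite a
        seen.add(event)
    return True

# Hypothetical collaboration: a 'request' must causally precede its 'reply',
# and a 'lock' must precede any 'update'.
constraints = [("request", "reply"), ("lock", "update")]
ok_trace  = ["lock", "request", "update", "reply"]
bad_trace = ["request", "update", "reply"]          # 'update' without a prior 'lock'

print(trace_satisfies(ok_trace, constraints))   # True
print(trace_satisfies(bad_trace, constraints))  # False
```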
452. Risk-Based Framework for Focused Assessment of System Dynamics Models
Schwandt, Michael Joseph, 28 May 2009
The lack of a consistent, rigorous testing methodology has contributed to system dynamics often not being well received in traditional modeling application areas or within the broader research community. With a foundation in taxonomy classification techniques, this research developed a modeling process risk-based framework focused on the objectives of the system dynamics methodology. This approach assists modelers in prioritizing the modeling process risk management requirements — and resources — for a project by employing a modeling process risk dictionary, a modeling process risk management methods database, and an algorithm for selecting methods based on a modeling process risk assessment.
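As a purely illustrative sketch of such a selection algorithm (the risk names, methods, scores, costs, and budget below are invented, not the framework's actual dictionary or database):

```python
# Illustrative sketch only: prioritize modeling-process risks by assessed score
# and pick mitigation methods from a methods database until a resource budget
# is exhausted.

risk_assessment = {"data_quality": 0.9, "model_structure": 0.7, "client_buy_in": 0.4}

methods_db = [
    {"name": "extreme-condition tests", "covers": {"model_structure"}, "cost": 2},
    {"name": "data audit",              "covers": {"data_quality"},    "cost": 3},
    {"name": "stakeholder workshops",   "covers": {"client_buy_in"},   "cost": 4},
]

def select_methods(assessment, db, budget):
    """Greedy selection: address risks in decreasing score order while budget lasts."""
    chosen, remaining = [], budget
    for risk, _score in sorted(assessment.items(), key=lambda kv: -kv[1]):
        for method in db:
            if risk in method["covers"] and method["cost"] <= remaining and method not in chosen:
                chosen.append(method)
                remaining -= method["cost"]
                break
    return [m["name"] for m in chosen]

print(select_methods(risk_assessment, methods_db, budget=5))
# ['data audit', 'extreme-condition tests']
```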
System dynamics benefits from the modeling process risk management approach include more efficient use of risk management resources and more effective management of modeling process risks. In addition, the approach includes qualities that support the achievement of verification, validation, and accreditation (VV&A) principles.
A system dynamics model was developed as the apparatus for assessing the impacts of various modeling process risk management policies, including those found in the traditional system dynamics method, the more commonly practiced method, and the method as modified by the integration of the modeling risk management framework. These policies are defined by common parameters within the model, allowing comparison of system behavior as affected by the policy parameters.
The system dynamics model enabled the testing of the potential value of the system dynamics modeling process framework. Results from a fractional factorial designed experiment identified the sensitive parameters that affect the key result measures established to assess model behavior, focusing on timeliness, effectiveness, and quality. The experimental process highlighted the capabilities of system dynamics modeling to provide insight from the model structure, in addition to the system results. These insights supported assessment of the policies that were tested.
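As a toy illustration of how main effects are read off a two-level fractional factorial design (the factors, levels, and responses below are invented and are not the study's data):

```python
# Sketch: the main effect of a factor in a two-level design is the mean response
# at its high level minus the mean response at its low level.

import numpy as np

# Coded design matrix (-1 = low, +1 = high) for three hypothetical policy parameters,
# a 2^(3-1) half fraction.
design = np.array([
    [-1, -1, -1],
    [ 1, -1,  1],
    [-1,  1,  1],
    [ 1,  1, -1],
])
response = np.array([52.0, 40.0, 47.0, 44.0])  # e.g., simulated project duration

factors = ["risk_budget", "review_frequency", "team_experience"]
for j, name in enumerate(factors):
    high = response[design[:, j] == 1].mean()
    low = response[design[:, j] == -1].mean()
    print(f"{name}: main effect = {high - low:+.1f}")
```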
The proposed modeling process risk management policy delivered results that were substantially better than those of the baseline policy. The simulated project was delivered 26% faster, with 49% fewer rework discovery resources, and 1% higher actual work content in the project. The proposed policy also delivered superior results when compared to other common approaches to system dynamics modeling process risk management. / Ph. D.
453. Study of Equivalence in Systems Engineering within the Frame of Verification
Wach, Paul F., 20 January 2023
This dissertation contributes to the theoretical foundations of systems engineering (SE) and exposes an unstudied SE area: the definition of verification models. In practice, verification models are largely defined qualitatively, based on heuristic assumptions rather than a science-based approach. For example, we may state the desired representativeness of a verification model in qualitative terms of low, medium, or high fidelity in the early phases of a system lifecycle, when verification requirements are typically defined. Given that fidelity is a measure of approximation to reality, and that the (real) final product does not (or may not) yet exist in early phases, we are stating a desire for, and making assumptions of, representative equivalence that may not hold. Therefore, this dissertation contends that verification models can and should be defined on the scientific basis of systems theoretic principles.
Furthermore, the practice of SE is undergoing a digital transformation, accompanied by a desire to enhance SE educationally and as a discipline, which this research proposes to address through a science-based approach grounded in the mathematical formalism of systems theory. The maturity of engineering disciplines is reflected in their science-based approaches, such as computational fluid dynamics and finite element analysis. Much of the discipline of SE remains qualitatively descriptive, and thus may suffer from interpretation discrepancies, rather than being grounded in inherently analytical theoretical foundations, as is a stated goal of the SE professional organization, the International Council on Systems Engineering (INCOSE). Additionally, with the increased complexity of modern engineered systems comes the impracticality of verification through traditional means, which has resulted in verification being described as broken and in need of theoretical foundations.
The relationships used to define verification models are explored by building on the systems theoretic lineage of A. Wayne Wymore, including computational systems theory, the theory of system design, and the theory of problem formulation. The core systems theoretic concepts used to frame the relationship-based definition of verification models are system morphisms, which characterize equivalence between pairs of systems; problem spaces of functions, which bound the acceptability of solution systems; and the hierarchy of system specification, which characterizes stratification. The research inquiry asked how verification models should be defined and hypothesized that they should be defined through a combination of systems theoretic relationships among verification artifacts: system requirements, system designs, verification requirements, and verification models.
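As a loose illustration of the morphism concept (far coarser than Wymore's formalism), the sketch below checks whether a state mapping between two small deterministic transition systems commutes with their transition functions; the counter and parity machine are hypothetical examples, not artifacts from the dissertation.

```python
# Simplified sketch: a mapping h between state sets is a homomorphism if
# h(d1(s, x)) == d2(h(s), x) for every state s and input x.

def is_homomorphism(states1, inputs, d1, d2, h):
    """d1, d2: dicts (state, input) -> next state; h: dict state1 -> state2."""
    return all(h[d1[(s, x)]] == d2[(h[s], x)] for s in states1 for x in inputs)

# Hypothetical example: a 2-bit counter (system design) and a parity machine
# (verification model) under the mapping "state -> state mod 2".
states1 = [0, 1, 2, 3]
inputs = ["tick"]
d1 = {(s, "tick"): (s + 1) % 4 for s in states1}   # 2-bit counter
d2 = {(p, "tick"): (p + 1) % 2 for p in [0, 1]}    # parity machine
h = {s: s % 2 for s in states1}

print(is_homomorphism(states1, inputs, d1, d2, h))  # True
```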
The conclusions of this research provide a science-based metamodel for defining verification models through systems theoretic principles. Verification models were shown to be defined indirectly from system requirements, through system designs and verification requirements. Verification models are expected to be morphically equivalent to their corresponding system designs; however, infinitely many equivalent models may exist, and this space may be reduced by defining bounding conditions. These bounding conditions were found to be defined through verification requirements formed as (1) verification requirement problem spaces, which characterize the verification activity on the basis of morphic equivalence to the system requirements, and (2) morphic conditions, which specify the desired equivalence between a system design and a verification model. An output of this research is a systems theoretic metamodel of verification artifacts, which may be used as a science-based approach to defining verification models and to advancing the maturity of the SE discipline. / Doctor of Philosophy / We conduct verification to increase our confidence that the system design will do what is desired, as defined in the requirements. However, it is not feasible to perform verification on the final product design within the full scope of the requirements, due to cost, schedule, and availability. As a result, we leverage surrogates, such as verification models, to conduct verification and determine our confidence in the system design.
A challenge to our confidence in the system design is that we accept the representativeness of these surrogates on faith alone, rather than on scientific proof. This dissertation defines a science-based approach to determining the representativeness of substitutes. In the discipline and practice of systems engineering, verification models serve as substitutes for the system design, and verification requirement problem spaces serve as substitutes for the requirements.
The research used mathematical principles to determine representative equivalence and found that a combination of relationship framings is needed to select verification models sufficiently. The framing includes relationships to the system being engineered and to the substitute conditions under which the verification model is examined, relative to the conditions under which the engineered system is expected to operate. A comparison of the research findings to the state of the discipline and practice was conducted and confirmed the unique contribution of the dissertation research. In regard to framing the acceptability of verification models, this research established the foundations for a science-based method to advance the field of systems engineering.
454. Optimizing Distributed Transactions: Speculative Client Execution, Certified Serializability, and High Performance Run-Time
Pandey, Utkarsh, 01 September 2016
On-line services already form an important part of modern life and have immense potential for growth. Most of these services are supported by transactional systems, which in many cases are backed by database management systems (DBMS). Many on-line services use replication to ensure high availability, fault tolerance, and scalability. Replicated systems typically consist of different nodes running the service, coordinated by a distributed algorithm that aims to drive all the nodes along the same sequence of states by providing a total order over their operations. Thus, optimizing both the local DBMS operations (through concurrency control) and the distributed algorithm driving the replicated service can enhance the performance of on-line services.
Deferred Update Replication (DUR) is a well-known approach to designing scalable replicated systems. In this method, the database is fully replicated on each distributed node. User threads perform transactions locally and optimistically before a total order is reached. DUR-based systems work best when remote transactions rarely conflict. Even in such scenarios, transactions may abort due to local contention on nodes. A commonly adopted method to alleviate this local contention is to invoke a local certification phase that checks whether a transaction conflicts with other local transactions that have already completed. If so, the given transaction is aborted locally without burdening the ordering layer. However, this approach still results in many local aborts, which significantly degrade performance.
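To make the certification idea concrete, here is a minimal sketch (not PXDUR's implementation) of a local certification check: a transaction is aborted if any object in its read-set was overwritten by a transaction that committed after it began. All names and timestamps are illustrative.

```python
# Sketch of DUR-style local certification against a log of committed write-sets.

class Transaction:
    def __init__(self, start_ts):
        self.start_ts = start_ts
        self.read_set = set()
        self.write_set = {}

def certify(txn, committed_log):
    """committed_log: list of (commit_ts, written_keys) in commit order."""
    for commit_ts, written in committed_log:
        if commit_ts > txn.start_ts and txn.read_set & written:
            return False  # stale read detected -> abort locally
    return True

log = [(5, {"x"}), (8, {"y", "z"})]
t = Transaction(start_ts=6)
t.read_set = {"y"}
print(certify(t, log))  # False: 'y' was rewritten at ts=8, after t started
```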
The first main contribution of this thesis is PXDUR, a DUR-based transactional system that enhances the performance of DUR-based systems by alleviating local contention and increasing the transaction commit rate. PXDUR alleviates local contention by allowing speculative forwarding of shared objects from locally committed transactions awaiting total order to running transactions. PXDUR allows transactions running in parallel to use speculative forwarding, thereby enabling the system to exploit highly parallel multi-core platforms. PXDUR also enhances performance by optimizing the transaction commit process: it allows committing transactions to skip read-set validation when it is safe to do so. PXDUR achieves performance gains of an order of magnitude over its closest competitors under favorable conditions.
Transactions also form an important part of centralized DBMSs, which tend to support multi-threaded access to exploit highly parallel hardware platforms. Applications can be wrapped in transactions, which then access the DBMS according to the rules of concurrency control. This allows users to develop applications that run on DBMSs without worrying about synchronization. Serializability is the de facto standard form of isolation required by transactions in many applications. The existing methods used by DBMSs to enforce serializability rely on explicit fine-grained locking. This eager-locking-based approach is pessimistic and can be too conservative for many applications.
The locking approach can severely limit the performance of DBMSs, especially in scenarios with moderate to high contention. This leads to the second major contribution of this thesis: TSAsR, an adaptive transaction processing framework that can be applied to DBMSs to improve performance. TSAsR allows the DBMS's internal synchronization to be more relaxed and enforces serializability through the optimistic processing of external meta-data. It does not require any changes to the application code and achieves orders-of-magnitude performance improvements for moderate- and high-contention cases.
Replicated transaction processing systems require a distributed algorithm to keep the system consistent by ensuring that each node executes the same sequence of deterministic commands. These algorithms generally employ State Machine Replication (SMR). Enhancing the performance of such algorithms is a potential way to increase the performance of distributed systems. However, the adoption of new SMR algorithms in production settings is limited by the huge verification cost involved in proving their correctness.
There are frameworks that allow easy specification of SMR algorithms and their subsequent verification; however, algorithms implemented in such frameworks give poor performance. This leads to the third major contribution of this thesis: Verified JPaxos, a JPaxos-based runtime system that can be integrated with an easy-to-verify I/O automaton based on the Multipaxos protocol. Multipaxos is specified in Higher Order Logic (HOL) for ease of verification, and this specification is used to generate executable code representing the Multipaxos state changes (an I/O automaton). The runtime drives the HOL-generated code and interacts with the service and network to create a fully functional replicated Multipaxos system. The runtime inherits its design from JPaxos, along with some optimizations. It achieves significant improvement over a state-of-the-art SMR verification framework while remaining comparable in performance to non-verified systems. / Master of Science
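For context, the sketch below illustrates only the basic SMR property such runtimes rely on, namely that replicas applying the same totally ordered sequence of deterministic commands converge to the same state; it is not Verified JPaxos code, and the command format is invented.

```python
# Sketch: deterministic replicas + identical total order => identical state.

def apply_command(state: dict, command: tuple) -> dict:
    op, key, value = command
    new_state = dict(state)
    if op == "put":
        new_state[key] = value
    elif op == "add":
        new_state[key] = new_state.get(key, 0) + value
    return new_state

def run_replica(ordered_commands):
    state = {}
    for cmd in ordered_commands:
        state = apply_command(state, cmd)
    return state

total_order = [("put", "a", 1), ("add", "a", 4), ("put", "b", 7)]
replica1 = run_replica(total_order)
replica2 = run_replica(total_order)
print(replica1 == replica2, replica1)  # True {'a': 5, 'b': 7}
```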
455. A taxonomical review of software verification techniques: an illustration using discrete-event simulation
Whitner, Richard B., 21 November 2012
The use of simulation and modeling as a technique for solving today's complex problems is ever-increasing. Correspondingly, the demands placed on the software that serves as a computer-executable representation of the simulation model are increasing. With the increased complexity of simulation models comes a greater need for model verification, particularly programmed model verification. Unfortunately, current model verification technology is lacking in techniques that satisfy these verification needs; specifically, there are few guidelines for performing programmed model verification. There is, however, an abundance of software verification techniques applicable to simulation model verification. An extensive review of techniques applicable to simulation programmed model verification is presented using simulation and modeling terminology. A taxonomy for programmed model verification methods is developed. The usefulness of the taxonomy is twofold: (1) it provides an approach for applying software verification techniques to the problem of programmed model verification, and (2) its breadth provides a broad spectrum of perspectives from which to assess the credibility of simulation results. A simulation case study demonstrates the usefulness of the taxonomy and some of the verification techniques.
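As a toy illustration of how such a taxonomy might be organized and queried (the categories, techniques, and phases below are simplified stand-ins, not the thesis's actual taxonomy):

```python
# Sketch: a taxonomy as categories of techniques tagged with the life-cycle
# phases where they apply, queried for a given phase.

taxonomy = {
    "informal": {"techniques": ["desk checking", "walkthroughs"],
                 "phases": {"design", "implementation"}},
    "static":   {"techniques": ["data-flow analysis", "syntax analysis"],
                 "phases": {"implementation"}},
    "dynamic":  {"techniques": ["assertion checking", "trace analysis"],
                 "phases": {"implementation", "experimentation"}},
    "formal":   {"techniques": ["inductive assertions"],
                 "phases": {"design"}},
}

def techniques_for(phase: str) -> list[str]:
    """Return every technique whose category applies to the given phase."""
    return [t for cat in taxonomy.values() if phase in cat["phases"]
            for t in cat["techniques"]]

print(techniques_for("implementation"))
```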
By referencing this work, one can determine which techniques can be used, and when, throughout the development life cycle, as well as how to perform each technique, how difficult each will be to apply, and how effective it will be. The simulation modeler, as well as the software engineer, will find the taxonomy and techniques valuable tools for guiding verification efforts. / Master of Science
456. Mining Multinode Constraints and Complex Boolean Expressions for Sequential Equivalence Checking
Goel, Neha, 13 August 2010
Integrated circuit design has progressed significantly over the last few decades. The increasing complexity of hardware systems poses several challenges to digital hardware verification. Functional verification has become the most expensive and time-consuming task in the overall product development cycle. Almost 70% of the total verification time is consumed by design verification, and this is projected to worsen further. One of the reasons for this complexity is that the synthesis and optimization techniques (automated as well as manual) used to improve performance, area, delay, and other measures have made the final implementation of the design very different from the golden (reference) model. Determining functional correctness between the reference and the implementation using exhaustive simulation is almost always infeasible. An alternative approach is to prove that the optimized design is functionally equivalent to the reference model, which is known to be functionally correct. The most widely used formal method for this process is equivalence checking. The success of combinational equivalence checking (CEC) has enabled aggressive combinational logic synthesis and optimization for circuits with millions of logic gates. However, without powerful sequential equivalence checking (SEC) techniques, the potential and extent of sequential optimization is quite limited. In other words, the success of SEC could unleash a plethora of aggressive sequential optimizations that take circuit design to the next level. Currently, SEC remains extremely difficult compared to CEC, due to the huge search space of the problem.
Sequential equivalence checking remains a challenging problem; in this thesis we address it using efficient learning techniques. The first approach is to mine missing multi-node patterns from the mining database, verify them, and add those proven true to the unbounded SEC framework. The second approach is to mine powerful, generalized Boolean relationships among flip-flops and internal signals in a sequential circuit using a data mining algorithm. In contrast to traditional learning methods, our mining algorithms can extract illegal state cubes and inductive invariants. These invariants can be arbitrary Boolean expressions and help prune a large don't-care space for equivalence checking.
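The following sketch conveys the mining flavor only, not the thesis's algorithm: candidate constants and pairwise implications are harvested from invented simulation traces, and in practice they would still have to be proven (for example, inductively) before being used as constraints.

```python
# Sketch: scan 0/1 simulation traces of named signals for candidate
# relationships -- constant signals and pairwise implications a=1 -> b=1.

def mine_candidates(traces):
    """traces: dict signal_name -> list of 0/1 values, one per simulation cycle."""
    names = list(traces)
    n_cycles = len(next(iter(traces.values())))
    candidates = []
    for a in names:                                   # candidate constants
        vals = set(traces[a])
        if len(vals) == 1:
            candidates.append(f"{a} == {vals.pop()}")
    for a in names:                                   # candidate implications
        for b in names:
            if a != b and all(traces[b][t] == 1
                              for t in range(n_cycles) if traces[a][t] == 1):
                candidates.append(f"{a} -> {b}")
    return candidates

traces = {"en": [0, 1, 1, 0], "busy": [0, 1, 1, 1], "rst_n": [1, 1, 1, 1]}
print(mine_candidates(traces))
# ['rst_n == 1', 'en -> busy', 'en -> rst_n', 'busy -> rst_n']
```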
The two approaches are complementary in nature: one computes a subset of illegal states that cannot occur in normal functional mode, while the other mines legal constraints that represent characteristics of the miter circuit and can never be violated. These powerful relations, when added as new constraint clauses to the original formula, significantly increase the deductive power of the SAT engine, thereby pruning a larger portion of the search space. As a result, the memory required and the time taken to solve the SEC problem are reduced. / Master of Science
457. Strategies for Performance and Quality Improvement of Hardware Verification and Synthesis Algorithms
Elbayoumi, Mahmoud Atef Mahmoud Sayed, 24 January 2015
According to Moore's law, the capacity of integrated circuits (ICs) doubles every 18 months. This causes an exponential increase in the available area and, hence, in the complexity of modern digital designs. This consistent, enormous growth challenges different research areas in Electronic Design Automation (EDA). Thus, various EDA applications, such as equivalence checking, model checking, Automatic Test Pattern Generation (ATPG), functional bi-decomposition, and technology mapping, need to keep pace with these challenges. In this thesis, we are concerned with improving the quality and performance of EDA algorithms, particularly in the areas of hardware verification and synthesis.
First, we introduce algorithms to manipulate Reduced Ordered Binary Decision Diagrams (ROBDDs) on multi-core machines. To perform multiple BDD operations concurrently, our algorithm uses breadth-first search (BFS). As ROBDD algorithms are memory-intensive, maintaining data locality is an important issue. Therefore, we propose using the hopscotch hashing technique for both the unique table and the BFS queues to improve ROBDD construction time on the parallel platform. Hopscotch hashing not only improves the locality of the manipulated data, but also provides a way to cache recently performed BDD operations. Consequently, time and space usage can be traded off.
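For readers unfamiliar with ROBDD construction, the sketch below shows only the sequential unique-table idea that such algorithms build on; the thesis's actual contribution (BFS manipulation with hopscotch hashing on multi-core machines) is not reproduced here, and the variable names are arbitrary.

```python
# Sketch: hash-consing ROBDD nodes through a unique table, with the
# redundant-test reduction rule applied on creation.

unique_table = {}                 # (var, low_id, high_id) -> node id
nodes = {0: "FALSE", 1: "TRUE"}   # terminal nodes

def make_node(var, low, high):
    """Return a canonical node id, applying reduction and sharing rules."""
    if low == high:                      # redundant test elimination
        return low
    key = (var, low, high)
    if key not in unique_table:          # one node per (var, low, high) triple
        node_id = len(nodes)
        nodes[node_id] = key
        unique_table[key] = node_id
    return unique_table[key]

x1 = make_node("x1", 0, 1)       # the function x1
same = make_node("x1", 0, 1)     # structurally identical -> same node id
print(x1 == same)                # True: sharing via the unique table
print(make_node("x2", x1, x1))   # reduction: returns x1's id, no new node
```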
Second, we use static implications to enhance the performance of SAT-based Bounded Model Checking (BMC). We propose a parallel deduction engine that efficiently utilizes low-cost, off-the-shelf multi-core processors to compute the implications; with this engine, we can significantly reduce the time spent analyzing the deduced implications. We then formulate the clause-filtering problem as an elegant set-covering problem and propose a novel greedy algorithm, based on Johnson's algorithm, to find the optimal set of clauses to accelerate the BMC solution.
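As an illustration of the set-covering view, the classic greedy heuristic (in the spirit of the Johnson's-algorithm-based approach mentioned above) is sketched below on a made-up clause-filtering instance; it is not the thesis's formulation or data.

```python
# Sketch: each candidate implication clause "covers" a set of elements
# (e.g., conflicts or BMC instances it is expected to help with); greedily
# pick the clause covering the most still-uncovered elements.

def greedy_cover(universe, candidates):
    """candidates: dict clause_name -> set of elements it covers."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            break                      # nothing left can cover the remainder
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

universe = {1, 2, 3, 4, 5}
clauses = {"c1": {1, 2, 3}, "c2": {2, 4}, "c3": {4, 5}, "c4": {5}}
print(greedy_cover(universe, clauses))  # ['c1', 'c3'] covers everything
```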
Third, we propose a novel synthesis paradigm for achieving timing closure, called Timing-Aware CUt Enumeration (TACUE). In TACUE, optimization is conducted through three aspects. First, we propose a new divide-and-conquer strategy that generates multiple sub-cuts on the critical parts of the circuit. Second, cut enumeration is applied through two cutting strategies: the topology-aware strategy preserves the general topology of the circuit by applying TACUE only to self-contained cuts, while the topology-masking strategy investigates circuit cuts beyond their current topology. Third, we propose an efficient parallel synthesis framework that reduces the computation time for synthesizing TACUE sub-cuts. We conducted experiments on large and difficult industrial benchmarks.
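For orientation, plain k-feasible cut enumeration on a tiny, hypothetical gate network is sketched below; TACUE's timing awareness, cutting strategies, and parallel framework are not shown.

```python
# Sketch: bottom-up enumeration of k-feasible cuts for two-input gates.

def enumerate_cuts(gates, primary_inputs, k=3):
    """gates: dict name -> (in1, in2), topologically ordered. Returns name -> set of cuts."""
    cuts = {pi: {frozenset([pi])} for pi in primary_inputs}
    for g, (a, b) in gates.items():
        g_cuts = {frozenset([g])}                     # the trivial cut
        for ca in cuts[a]:
            for cb in cuts[b]:
                merged = ca | cb
                if len(merged) <= k:                  # keep only k-feasible cuts
                    g_cuts.add(merged)
        cuts[g] = g_cuts
    return cuts

gates = {"n1": ("a", "b"), "n2": ("b", "c"), "n3": ("n1", "n2")}
for name, cs in enumerate_cuts(gates, ["a", "b", "c"]).items():
    print(name, sorted(tuple(sorted(c)) for c in cs))
```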
Finally, we propose the first scalable SAT-based approaches for Observability Don't-Care (ODC) clock gating. Moreover, we intelligently choose inductive invariant candidates such that their validation benefits clock-gating-based low-power design. / Ph. D.
458. Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate
Ibrar, Fahad, 28 April 2020
There are a number of techniques for automatic bug detection, but most of them have a high false positive rate when used in practice. This work proposes an approach, named Strip-Miner, that combines simple dependency analysis of code with a data mining technique, frequent itemset mining, to reduce the false positive rate. We adopt a two-phase approach: (1) finding potential bugs and (2) filtering out the false positives. In the first phase, we extract code elements and the dependencies among them using static analysis, and we use frequent itemset mining to find programming patterns, where a deviation from these patterns is considered a potential bug. In the second phase, we use the extracted dependencies to build dependency chains between program elements in a programming pattern; the lack of such a chain suggests that a reported bug is a false positive.
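A toy sketch of the first phase is shown below (the function names, API names, and support threshold are invented, and the second, dependency-based filtering phase is not shown): frequent pairs of co-occurring elements are mined, and any function that contains one element of a frequent pair without the other is flagged as a potential bug.

```python
# Sketch: mine frequent co-occurring pairs across functions, then flag
# deviations from those patterns as potential bugs.

from itertools import combinations
from collections import Counter

def mine_pairs(itemsets, min_support):
    """Count co-occurring pairs of elements across functions; keep the frequent ones."""
    counts = Counter()
    for items in itemsets.values():
        counts.update(combinations(sorted(items), 2))
    return {pair for pair, c in counts.items() if c >= min_support}

def potential_bugs(itemsets, frequent_pairs):
    """Flag any function that uses one element of a frequent pair without the other."""
    reports = []
    for func, items in itemsets.items():
        for a, b in sorted(frequent_pairs):
            if a in items and b not in items:
                reports.append((func, f"uses '{a}' without '{b}'"))
            elif b in items and a not in items:
                reports.append((func, f"uses '{b}' without '{a}'"))
    return reports

# Hypothetical per-function sets of called APIs.
usage = {
    "f1": {"lock", "unlock", "read"},
    "f2": {"lock", "unlock", "write"},
    "f3": {"lock", "write"},   # suspicious: lock without unlock
}
pairs = mine_pairs(usage, min_support=2)
print(potential_bugs(usage, pairs))
# f3 is flagged for 'lock' without 'unlock'; f1 is also flagged for 'lock'
# without 'write' -- exactly the kind of false positive phase 2 would filter.
```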
Our evaluation on a set of 7 benchmarks consisting of large software codebases, including OpenSSL, PostgreSQL, Git, FFmpeg, SQLite, Binutils, and PuTTY, shows that combining simple dependency analysis with pattern mining can significantly decrease the number of reported bugs. Using our approach, we are able to reduce the number of reported bugs by up to 99.9%, with an average false positive rate of 65.19% and true positive rate of 34.18%, as compared to an earlier frequent-itemset-mining-based approach, PR-Miner. / Master of Science / Every software codebase has bugs that can change its expected behavior. There have been many efforts to automate the process of bug detection, but most of the proposed techniques have a high rate of false alarms. Some of these techniques leverage the information available in the code to extract programming patterns that can be used to find potential bugs. Although this approach has proved fruitful for finding bugs, the large number of false alarms makes it almost useless in software development.
The elements present in a software codebase have relationships among them, formally known as dependencies, and the process of finding them is known as dependency analysis. Market basket analysis is a technique used by large retailers to find associations between items; it works by looking for combinations of items that frequently occur together in transactions. Similarly, in software code, combinations of elements that occur together can be used to find associations between them. This technique is formally known as frequent itemset mining in the data mining domain. This work proposes an approach, named Strip-Miner, that combines dependency analysis with frequent itemset mining to reduce the rate of false alarms. We adopt a two-phase approach: (1) finding potential bugs in the code and (2) filtering out the false alarms. In the first phase, we extract code elements and the dependencies among them, and use frequent itemset mining to find programming patterns, where a deviation from these patterns is considered a potential bug. In the second phase, we use the extracted dependencies to build dependency chains between the program elements present in a programming pattern; the lack of such a chain is an indication of a false alarm.
Our evaluation on a set of 7 benchmarks consisting of large software codebases, including version control systems, database management systems, software security libraries, and utility software such as media players, shows that combining simple dependency analysis with frequent itemset mining can significantly decrease the rate of false alarms. Using our approach, we are able to reduce the number of reported bugs by up to 99.9%, with an average false alarm rate of 65.19% and real bug rate of 34.18%, as compared to an earlier frequent-itemset-mining-based approach, PR-Miner.
459. Measuring Sustainability: Why and How Public-Private Partnerships Achieve Envision's Platinum Verification
Vicchio, Nicolas, 11 June 2021
Public-private partnerships (P3) are outperforming other delivery methods in their ability to meet sustainability objectives. The main driver for decisions in any project has traditionally been cost analysis. This research aims to determine why and how P3 projects are more likely to implement effective sustainable decisions throughout a project's lifecycle. In this context, decision-making refers explicitly to the project team's reasoning for implementing sustainable practices beyond cost-effectiveness.
The researcher generated interview questions for project teams to understand why and how P3 projects were better at implementing sustainable decisions. Sustainability was operationalized using the Envision framework. Interviews with project teams that utilized the P3 delivery method and received the highest sustainability rating provided a first-hand account of the decision-making process. The interviews were analyzed using framework analysis, and the results identify the motivations for implementing sustainability.
The results suggest that the P3 contract structure is the most compelling reason these projects can implement sustainable decisions better than other project delivery methods. The written requirements from the contract documents or legislative requirements and the Contractor's desire to do a good job are other drivers for this increased sustainability. / Master of Science / The infrastructure in the United States is reaching the end of its useful life. These assets need significant investments to continue serving their original function. Various project delivery methods exist that either deliver a product or a service. Delivery methods such as design-bid-build or design-build focus on delivering an asset such as a bridge or road at the end of the contract. The public-private partnership (P3) delivery method focuses on delivering an asset and operating that asset for an extended period until the end of the contract, typically no less than 25 years. Building these assets sustainably will help drive down costs and increase useful life.
Sustainability goals cover the economic, environmental, and social aspects of the project. The economic goals include providing a responsible cost-benefit to the users or taxpayers and lasting for an extended period. The environmental goals include minimizing the project's impact on the environment. The social goals include building the right project so that it solves the correct community issue.
The P3 delivery method of delivering both an asset and a service has outperformed other delivery methods in sustainability. This paper explored the reasons that project teams decide to make sustainable choices throughout their project contract.
460. Development of a Thermal Management Methodology for a Front-End DPS Power Supply
Sewall, Evan Andrew, 11 November 2002
Thermal management is a rapidly growing field in power electronics today. As power supply systems are designed with higher power density levels, keeping component temperatures within suitable margins of their maximum operating limits becomes an increasingly challenging task. This project focuses on thermal management at the system level, using a 1.2 kW front-end power converter as the subject of a case study. The primary goal was to establish a methodology for using the computer code I-deas to computationally simulate the thermal performance and component temperatures within the system.
A series of four benchmarking studies was used to verify the computational predictions. The first test compares predictions of a real system with thermocouple measurements, and the second compares computational predictions with infrared camera and thermocouple measurements on a component mounted to a heat sink. The third experiment involves using flow visualization to verify the presence of vortices in the flow field, and the fourth is a comparison of computational temperature predictions of a DC heater in a controlled flow environment.
A radiation study using the Monte Carlo ray-trace method for radiation heat transfer reduced the predicted temperatures of some significant components. This study focused on an aspect of heat transfer that is often ignored in power electronics.
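To illustrate the Monte Carlo ray-trace idea in its simplest form (the geometry, ray count, and seed below are arbitrary and unrelated to the converter model), the radiation view factor between two directly opposed unit-square plates can be estimated by launching Lambertian rays from one plate and counting hits on the other:

```python
# Sketch: Monte Carlo estimate of the view factor F_12 between two parallel,
# directly opposed unit squares separated by distance h.

import math, random

def view_factor_mc(h=1.0, n_rays=200_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n_rays):
        x, y = random.random(), random.random()          # emission point on plate 1
        theta = math.asin(math.sqrt(random.random()))    # cosine-weighted polar angle
        phi = 2.0 * math.pi * random.random()
        dz = math.cos(theta)                             # always > 0, toward plate 2
        t = h / dz
        xh = x + t * math.sin(theta) * math.cos(phi)
        yh = y + t * math.sin(theta) * math.sin(phi)
        if 0.0 <= xh <= 1.0 and 0.0 <= yh <= 1.0:
            hits += 1
    return hits / n_rays

print(view_factor_mc())  # roughly 0.2 for unit squares separated by one side length
```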
A component rearrangement study was performed to establish a set of guidelines for component placement in future electronic systems. This was done using a test matrix in which the converter layout was varied in a number of different ways to help determine the thermal effects. Based on the options explored and the electrical constraints on the circuit, an optimum circuit layout was suggested for maximum thermal performance.
This project provides a foundation for the thermal management of power electronics at the system level. The use of I-deas as a computational modeling tool was explored, and comparison of the code with experimental measurements helped assess the accuracy of I-deas as a system-level thermal modeling tool. / Master of Science