31

Clausal resolution for branching-time temporal logic

Bolotov, Alexander January 2000 (has links)
No description available.
32

Early Verification of the Power Delivery Network in Integrated Circuits

Abdul Ghani, Nahi 05 January 2012 (has links)
The verification of power grids in modern integrated circuits must start early in the design process when adjustments can be most easily incorporated. We adopt an existing early verification framework. The framework is vectorless, i.e., it does not require input test patterns and does not rely on simulating the power grid subject to these patterns. In this framework, circuit uncertainty is captured via a set of current constraints that encode what may be known or specified about circuit behavior. Grid verification becomes a question of finding the worst-case grid behavior which, in turn, entails the solution of linear programs (LPs) whose size and number are proportional to the size of the grids. The thesis builds on this systematic framework for dealing with circuit uncertainty with the aim of improving efficiency and expanding the capabilities it handles. One contribution introduces an efficient method based on a sparse approximate inverse technique to greatly reduce the size of the required linear programs while ensuring a user-specified over-estimation margin on the exact solution. The application of the method is demonstrated under both R and RC grid models. Another contribution first extends grid verification under RC grid models to also check for the worst-case branch currents. This would require as many LPs as there are branches. Then, it shows how to adapt the approximate inverse technique to speed up the branch current verification process. A third contribution proposes a novel approach to reduce the number of LPs in the voltage drop and branch current verification problems. This is achieved by examining dominance relations among node voltage drops and among branch currents. This allows us to replace a group of LPs by one conservative and tight LP. A fourth contribution proposes an efficient verification technique under RLC models. The proposed approach provides tight conservative bounds on the maximum and minimum worst-case voltage drops at every node on the grid.
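As a hedged illustration of the vectorless formulation this record refers to (not the thesis's own code), the sketch below poses the worst-case voltage-drop computation for a small resistive grid as one linear program per node, solved with SciPy. The matrix names, the constraint form (per-node local bounds plus global budgets), and the use of a dense inverse are assumptions made for the example; the thesis's contribution is precisely to avoid the exact inverse by using a sparse approximate inverse with a controlled over-estimation margin.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_drops(G, local_max, U, global_max):
    """Vectorless worst-case voltage drops for an R-only grid (illustrative sketch).

    G             : (n, n) conductance matrix, so that G @ v = i.
    local_max     : (n,) per-node current upper bounds, 0 <= i <= local_max.
    U, global_max : global current constraints U @ i <= global_max.
    Returns an (n,) array of worst-case voltage drops, one LP per node.
    """
    n = G.shape[0]
    Ginv = np.linalg.inv(G)   # toy-sized example only; the thesis replaces this
                              # with a sparse approximate inverse plus a margin
    drops = np.zeros(n)
    for k in range(n):
        # maximise e_k^T G^{-1} i subject to the current constraints
        res = linprog(c=-Ginv[k, :], A_ub=U, b_ub=global_max,
                      bounds=[(0.0, lm) for lm in local_max], method="highs")
        drops[k] = -res.fun
    return drops

# Tiny made-up grid: 3 nodes, two global current budgets.
G = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
local_max = np.array([0.5, 0.8, 0.3])
U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
global_max = np.array([1.0, 0.9])
print(worst_case_drops(G, local_max, U, global_max))
```

Each LP maximises the drop at one node, which is why the number of LPs grows with grid size and why shrinking each LP, and grouping dominated nodes into a single conservative LP, pays off.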
34

A methodology for modeling the verification, validation, and testing process for launch vehicles

Sudol, Alicia 07 January 2016 (has links)
Completing the development process and getting to first flight has become a difficult hurdle for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry standard VVT planning is largely subjective with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that will quantitatively capture the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions. This method is applied to alternative VVT strategies for the Space Shuttle Main Engine to demonstrate how the impact of rework can be mitigated, using the actual test history as a baseline. Results indicate rework cost to be the primary driver in overall project risk, and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this final application problem demonstrates the merits of this methodology in evaluating VVT strategies and providing a risk-informed decision making framework for the verification, validation, and testing process of launch vehicle systems.
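As a rough, illustrative sketch of the kind of simulation the abstract describes (not the thesis's actual model), the snippet below runs a Monte Carlo of rework cycles over a sequence of VVT activities, with rework probabilities drawn from a dependency-structure-matrix-style coupling; all activity costs, durations, probabilities, and the quadratic risk score are invented for the example.

```python
import random

def simulate_vvt(costs, durations, rework_prob, runs=10000, seed=1):
    """Monte Carlo sketch of rework cycles in a VVT campaign (illustrative only).

    costs, durations  : per-activity baseline cost and duration.
    rework_prob[j][i] : assumed probability that completing activity j triggers
                        rework of an earlier activity i (a DSM-style coupling).
    Returns a list of sampled (total_cost, total_duration) pairs.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        total_cost, total_time = 0.0, 0.0
        queue = list(range(len(costs)))   # activities awaiting execution
        budget = 20 * len(costs)          # guard against pathological rework loops
        while queue and budget > 0:
            budget -= 1
            j = queue.pop(0)
            total_cost += costs[j]
            total_time += durations[j]
            for i in range(j):            # completing j may send earlier work back
                if rng.random() < rework_prob[j][i]:
                    queue.append(i)       # a rework cycle: re-execute activity i
        samples.append((total_cost, total_time))
    return samples

# Invented three-activity example with a quadratic risk score on cost overrun.
samples = simulate_vvt(costs=[10.0, 25.0, 40.0], durations=[2.0, 4.0, 6.0],
                       rework_prob=[[0.0, 0.0, 0.0],
                                    [0.15, 0.0, 0.0],
                                    [0.05, 0.20, 0.0]])
cost_target = 90.0
risk = sum(max(0.0, c - cost_target) ** 2 for c, _ in samples) / len(samples)
mean_cost = sum(c for c, _ in samples) / len(samples)
print(f"mean cost = {mean_cost:.1f}, quadratic overrun risk = {risk:.1f}")
```

The sampled cost and schedule distributions play the role of the discrete-event-simulation output in the abstract, and the quadratic overrun score mimics the quadratic risk impact function applied to those distributions.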
35

On verification and controller synthesis for probabilistic systems at runtime

Ujma, Mateusz January 2015 (has links)
Probabilistic model checking is a technique employed for verifying the correctness of computer systems that exhibit probabilistic behaviour. A related technique is controller synthesis, which generates controllers that guarantee the correct behaviour of the system. Not all controllers can be generated offline, as the relevant information may only be available when the system is running; for example, the reliability of services may vary over time. In this thesis, we propose a framework based on controller synthesis for stochastic games at runtime. We model systems using stochastic two-player games parameterised with data obtained from monitoring of the running system. One player represents the controllable actions of the system, while the other player represents the hostile uncontrollable environment. The goal is to synthesise, for a given property specification, a controller for the first player that wins against all possible actions of the environment player. Initially, controller synthesis is invoked for the parameterised model and the resulting controller is applied to the running system. The process is repeated at runtime when changes in the monitored parameters are detected, whereby a new controller is generated and applied. To ensure the practicality of the framework, we focus on three important aspects of it: performance, robustness, and scalability. We propose an incremental model construction technique to improve the performance of runtime synthesis. In many cases, changes in monitored parameters are small and models built for consecutive parameter values are similar. We exploit this and incrementally build a model for the updated parameters by reusing the previous model, effectively saving time. To address robustness, we develop a technique called permissive controller synthesis. Permissive controllers generalise the classical controllers by allowing the system to choose from a set of actions instead of just one. By using a permissive controller, a computer system can quickly adapt to a situation where an action becomes temporarily unavailable while still satisfying the property of interest. We tackle the scalability of controller synthesis with a learning-based approach. We develop a technique based on real-time dynamic programming which, by generating random trajectories through a model, synthesises an approximately optimal controller. We guide the generation using heuristics and can guarantee that, even in the cases where we only explore a small part of the model, we still obtain a correct controller. We develop a full implementation of these techniques and evaluate it on a large set of case studies from the PRISM benchmark suite, demonstrating significant performance gains in most cases. We also illustrate the working of the framework on a new case study of an open-source stock monitoring application.
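For readers unfamiliar with the underlying computation, the sketch below gives a minimal value-iteration routine for a stochastic two-player reachability game: the controller player maximises the probability of reaching a target set while the environment player minimises it, and the synthesised memoryless controller picks a value-maximising action in each controller state. The data layout (dictionaries keyed by state and action) and the plain reachability objective are assumptions made for the example; the thesis handles richer property specifications and PRISM-games-style models.

```python
def synthesise_controller(states, player, actions, prob, targets,
                          max_iters=10000, eps=1e-8):
    """Value iteration for a stochastic two-player reachability game (sketch).

    player[s]    : 1 if the controller picks the action in state s, 2 if the environment does.
    actions[s]   : list of actions available in s.
    prob[(s, a)] : list of (successor, probability) pairs for action a in state s.
    targets      : set of states the controller tries to reach.
    Returns (values, controller): reachability values and a memoryless controller
    mapping each controller state to a value-maximising action.
    """
    V = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(max_iters):
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            outcomes = [sum(p * V[t] for t, p in prob[(s, a)]) for a in actions[s]]
            new = max(outcomes) if player[s] == 1 else min(outcomes)
            delta, V[s] = max(delta, abs(new - V[s])), new
        if delta < eps:
            break
    controller = {s: max(actions[s],
                         key=lambda a: sum(p * V[t] for t, p in prob[(s, a)]))
                  for s in states if player[s] == 1 and s not in targets}
    return V, controller
```

In the permissive variant described in the abstract, one would keep every action whose value lies within a tolerance of the maximum rather than committing to a single maximiser.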
36

Mechanization of program construction in Martin-Löf's theory of types

Ireland, Andrew January 1989 (has links)
No description available.
37

Concurrent verification for sequential programs

Wickerson, John Peter January 2013 (has links)
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature. The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex. The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
38

Timed Refinement for Verification of Real-Time Object Code Programs

Dubasi, Mohana Asha Latha January 2018 (has links)
Real-time systems such as medical devices, surgical robots, and microprocessors are safety-critical applications that have hard timing constraints. The correctness of real-time systems is important as failure may result in severe consequences such as loss of money, time, and human life. These real-time systems have software to control their behavior. Typically, this software has source code that is converted to object code and then executed in safety-critical embedded devices. Therefore, it is important to ensure that both source code and object code are error-free. When dealing with safety-critical systems, formal verification techniques have laid the foundation for ensuring software correctness. Refinement-based techniques in formal verification can be used for the verification of real-time interrupt-driven object code. This dissertation presents an automated tool that verifies the functional and timing correctness of real-time interrupt-driven object code programs. The tool has been developed in three stages. In the first stage, a novel timed refinement procedure that checks for timing properties has been developed and applied on six case studies. The required model and an abstraction technique were generated manually. The results indicate that the proposed abstraction technique reduces the size of the implementation model by at least four orders of magnitude. In the second stage, the proposed abstraction technique has been automated. This technique has been applied to thirty different case studies. The results indicate that the automated abstraction technique can easily reduce the model size, which would in turn significantly reduce the verification time. In the final stage, two new automated algorithms are proposed that check the functional properties through safety and liveness. These algorithms were applied to the same thirty case studies. The results indicate that the functional verification can be performed in less than a second for the reduced model. The benefits of automating the verification process for real-time interrupt-driven object code include: 1) the overall size of the implementation model is reduced significantly; 2) the verification completes within a reasonable time; 3) the process can be applied multiple times during system development. / Several parts of this dissertation were funded by a grant from the United States Government and the generous support of the American people through the United States Department of State and the United States Agency for International Development (USAID) under the Pakistan – U.S. Science & Technology Cooperation Program. The contents do not necessarily reflect the views of the United States Government.
39

PARALLELIZED ROBUSTNESS COMPUTATION FOR CYBER PHYSICAL SYSTEMS VERIFICATION

Cralley, Joseph 01 May 2020 (has links)
Failures in cyber physical systems can be costly in terms of money and lives. The Mars Climate Orbiter alone had a mission cost of 327.6 million USD which was almost completely wasted due to an uncaught design flaw. This shows the importance of being able to define formal requirements as well as being able to test the design against these requirements. One way to define requirements is in Metric Temporal Logic (MTL), which allows for constraints that also have a time component. MTL can also have a distance metric defined that allows for the calculation of how close the MTL constraint is to being falsified. This is termed robustness.

Being able to calculate MTL robustness quickly can help reduce development time and costs for a cyber physical system. In this thesis, improvements to the current method of computing MTL robustness are proposed. These improvements lower the time complexity, allow parallel processing to be used, and lower the memory footprint of MTL robustness calculation. These improvements will hopefully increase the likelihood of MTL robustness being used in settings that were previously inaccessible due to time constraints or data resolution, or in real-time systems that need results quickly. These improvements will also open the possibility of using MTL in systems that operate for a large amount of time and produce a large amount of signal data.
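As context for the complexity claims (and not a reproduction of the thesis's parallel algorithm), the sketch below shows a standard trick for computing the robustness of a bounded "always" operator in linear time: the robustness of always_[0,w] (x >= c) at each step is a sliding-window minimum of the predicate robustness, which a monotone deque computes in O(n) rather than O(n*w). Function and variable names are chosen for the example.

```python
from collections import deque

def always_robustness(signal, window):
    """Robustness of 'always_[0, window] (predicate)' at every step of a sampled trace.

    signal[t] is the robustness of the atomic predicate at step t (e.g. x[t] - threshold);
    the bounded-'always' robustness at t is min(signal[t], ..., signal[min(t + window, n - 1)]).
    A monotone deque yields the sliding-window minimum in O(n) instead of O(n * window).
    """
    n = len(signal)
    out = [0.0] * n
    dq = deque()                     # candidate indices, values increasing front -> back
    for t in range(n - 1, -1, -1):   # scan backwards: the window looks forward in time
        while dq and signal[dq[-1]] >= signal[t]:
            dq.pop()                 # t dominates later samples with larger-or-equal values
        dq.append(t)
        if dq[0] > t + window:
            dq.popleft()             # front index has slid out of [t, t + window]
        out[t] = signal[dq[0]]
    return out

# Example: robustness of 'always_[0,2] (x >= 1)' for a short trace of x.
x = [3.0, 1.5, 0.8, 2.0, 2.5]
print(always_robustness([v - 1.0 for v in x], window=2))
```

The same windowed min/max structure is what makes the computation easy to split across chunks of the signal, which is the kind of parallelisation opportunity the abstract alludes to.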
40

Evaluating the Skillfulness of the Hurricane Analysis and Forecast System (HAFS) Forecasts for Tropical Cyclone Precipitation using an Object-Based Methodology

Stackhouse, Shakira Deshay 24 May 2022 (has links)
Tropical cyclones (TCs) are destructive, naturally occurring phenomena that can cause the loss of lives, extensive structural damage, and negative economic impacts. A major hazard associated with these tropical systems is rainfall, which can result in flood conditions, contributing to the death and destruction. The role rainfall plays in the severity of the TC aftermath emphasizes the importance of models producing reliable precipitation forecasts. Hurricane model precipitation forecasts can be improved through precipitation verification as the model weaknesses are identified. In this study, the Hurricane Analysis and Forecast System (HAFS), an experimental NOAA hurricane model, is evaluated for its skillfulness in forecasting TC precipitation. An object-based verification method is used as it is demonstrated to more accurately represent the model skill compared to traditional point-based verification methods. A 600 km search radius is implemented to capture the TC rainfall and the objects are defined by 2, 5, and 10 mm/hr rain rate thresholds. The 2 mm/hr threshold is chosen to predominantly represent stratiform precipitation, and the 5 and 10 mm/hr thresholds are used as approximate thresholds between stratiform and convective precipitation. Shape metrics such as area, closure, dispersion, and fragmentation are calculated for the forecast and observed objects and compared using a Mann-Whitney U test. The evaluation showed that model precipitation characteristics were consistent with storms that are too intense due to forecast precipitation being too central and enclosed around the TC center at the 2 mm/hr threshold, and too cohesive at the 10 mm/hr threshold. Changes in the model skill with lead time were also investigated. The model spin-up negatively impacted the model skill up to six hours at the 2 mm/hr threshold and up to three hours at the 5 mm/hr threshold, and the skill was not affected by the spin-up at the 10 mm/hr threshold. This indicates that the model took longer to realistically depict stratiform precipitation compared to convective precipitation. The model skill also worsened after 48 hours at the 2 and 10 mm/hr thresholds when the precipitation tended to be too cohesive. Future work will apply the object-based verification method to evaluate the TC precipitation forecasts of the Basin-Scale Hurricane Weather Research and Forecasting (HWRF-B) model. / Master of Science / Tropical cyclone (TC) precipitation can impose serious threats, such as flood conditions, which can result in death and severe damage. Due to these negative consequences associated with TC rainfall, it is important for affected populations to be sufficiently prepared once these TCs make landfall. Hurricane models play a large role in the preparations that are made as they predict the location and intensity of TC rainfall, which influences people's choices in taking precautionary measures. Therefore, hurricane models need to be accurate, and comparing the forecast precipitation to the observed precipitation allows for areas in which the model performs poorly to be identified. Model developers can then be informed of the areas that need to be improved. In this study, the precipitation forecasts from the Hurricane Analysis and Forecast System (HAFS) model, a hurricane model that is currently under development, are evaluated. The shape and size of the forecast and observed precipitation are quantified for light, moderate, and heavy precipitation using metrics such as area, perimeter, and elongation. The values of these metrics for the forecast and observed precipitation are compared using a statistical test. The results show that the hurricane model tended to forecast storms that are too weak due to forecast precipitation being too close to the TC center, too wrapped around the TC center, and too connected. The hurricane model is also evaluated for the accuracy of its forecasts with time from model initialization. The model had a harder time representing lighter precipitation than heavier precipitation during the first 6 hours after initialization. A decrease in the accuracy of the model forecasts was also shown 48 hours after initialization due to the general degradation of model accuracy with time after initialization. Future work will evaluate the TC precipitation forecasts of another hurricane model, the Basin-Scale Hurricane Weather Research and Forecasting (HWRF-B) model.
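As a small, hedged illustration of the statistical comparison described above, the snippet below applies a two-sided Mann-Whitney U test to forecast versus observed object areas with SciPy. The shape-metric values are invented for the example; real ones would come from the HAFS forecast and observed rainfall objects.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical shape-metric samples: one value per precipitation object
# (e.g. object area in km^2 at the 2 mm/hr threshold).
forecast_area = np.array([2.1e5, 1.8e5, 2.6e5, 3.0e5, 2.2e5])
observed_area = np.array([1.5e5, 1.7e5, 1.9e5, 2.4e5, 1.6e5])

# Two-sided Mann-Whitney U test: are the forecast and observed area distributions
# consistent with the same population? A small p-value flags a systematic model bias.
stat, p_value = mannwhitneyu(forecast_area, observed_area, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```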
