211 |
Algorithmic Verification Techniques for Mobile Code. Aktug, Irem (January 2008)
Modern computing platforms strive to support mobile code without putting system security at stake. These platforms can be viewed as open systems, as the mobile code adds new components to the running system. Establishing that such platforms function correctly can be divided into two steps. First, it is shown that the system functions correctly regardless of the mobile components that join it, provided that they satisfy certain assumptions. These assumptions can, for instance, restrict the behavior of the component to ensure that the security policy of the platform is not violated. Second, the mobile component is checked to satisfy its assumptions before it is allowed to join the system. This thesis presents algorithmic verification techniques to support this methodology. In the first two parts, we present techniques for the verification of open systems relative to the given component assumptions. In the third part, a technique for the quick certification of mobile code is presented for the case where a particular type of program rewriting is used as a means of enforcing the component assumptions.

In the first part of this study, we present a framework for the verification of open systems based on explicit state space representation. We propose Extended Modal Transition Systems (EMTS) as a suitable structure for representing the state space of open systems when assumptions on components are written in the modal μ-calculus. EMTSs are based on the Modal Transition Systems (MTS) of Larsen; they provide a formalism for graphical specification and facilitate a thorough understanding of the system through visualization. In interactive verification, this state space representation enables proof reuse and aids the user in guiding the verification process. We present a construction of state space representations from process algebraic open system descriptions, based on a maximal model construction for the modal μ-calculus. The construction is sound and complete for systems with a single unknown component, and sound for those without dynamic process creation. We also suggest a tableau-based proof system for establishing temporal properties of open systems represented as EMTSs. The proof system is sound in general and complete for prime formulae.

The problem of open system correctness also arises in compositional verification, where the problem of showing a global property of a system is reduced to showing local properties of its components. In the second part, we extend an existing compositional verification framework for Java bytecode programs. The framework employs control flow graphs with procedures to model component implementations and open systems for the purpose of checking control-flow properties. We generalize these models to capture exceptional and multi-threaded behavior. The resulting control flow graphs are specifically tailored to support the compositional verification principle; however, they are sufficiently intuitive and standard to be useful on their own. We describe how the models can be extracted from program code and give preliminary experimental results for our implementation of the extraction of control flow graphs with exceptions. We also discuss further tool support and practical applications of the method.

In the third part of the thesis, we develop a technique for the certification of safe mobile code by adapting the proof-carrying code scheme of Necula to the case of security policies expressed as security automata. In particular, we describe how proofs of policy compliance can be automatically generated for programs that include a monitor for the desired policy. A monitor is an entity that observes the execution of a program and terminates the program if a violation of the property is about to occur. One way to implement such a monitor is by rewriting the program to make it self-monitoring. Given a property, we characterize self-monitoring of Java bytecode programs for this property by an annotation scheme with annotations in the style of Floyd-Hoare logics. The annotations generated by this scheme can be extended in a straightforward way to form a correctness proof in the sense of axiomatic semantics of programs. The proof generated in this manner essentially establishes that the program satisfies the property because it contains a monitor for it. The annotations that comprise the proofs are simple and efficiently checkable, thus facilitating the certification of mobile code on devices with restricted computing power, such as mobile phones.
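As an editorial illustration of the monitoring idea described above, here is a minimal sketch of a security automaton acting as an execution monitor. It is written in Python for brevity (the thesis works on Java bytecode), and the policy, the event names, and the `PolicyViolation`/`SecurityAutomaton`/`monitored` names are all invented for the example, not taken from the thesis.

```python
# A toy security automaton used as an execution monitor. The policy
# ("no 'send' after a 'read' of secret data") is invented for illustration.

class PolicyViolation(Exception):
    """Raised when the next event would leave the automaton's legal states."""

class SecurityAutomaton:
    def __init__(self):
        self.state = "clean"
        self.delta = {
            ("clean", "read"): "tainted",
            ("clean", "send"): "clean",
            ("tainted", "read"): "tainted",
            # ("tainted", "send") is deliberately absent: it is forbidden.
        }

    def step(self, event):
        nxt = self.delta.get((self.state, event))
        if nxt is None:
            # Terminate just before the violating action would execute.
            raise PolicyViolation(f"'{event}' forbidden in state '{self.state}'")
        self.state = nxt

def monitored(program_events):
    """Run a stream of security-relevant events under the monitor."""
    monitor = SecurityAutomaton()
    for ev in program_events:
        monitor.step(ev)   # check-before-execute: the event is vetted first
        yield ev           # only policy-compliant events are performed

# list(monitored(["read", "send"])) raises PolicyViolation at 'send'.
```

In the thesis's setting, such a monitor is inlined into the program by rewriting, and the Floyd-Hoare-style annotations certify that the rewritten, self-monitoring program can never take a forbidden step.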
212 |
Early Dual Grid Voltage Integrity Verification. Avci, Mehmet (14 December 2010)
As part of power distribution network verification, one must check whether voltage fluctuations exceed some critical threshold. The traditional simulation-based solution to this problem is intractable due to the large number of possible circuit behaviors. It also requires full knowledge of the details of the underlying circuitry, and so cannot verify the power distribution network early in the design flow. In this work, we consider the power and ground grids together (i.e., the dual grid) and formulate the problem of computing the worst-case voltage fluctuations of the dual grid under the framework of current constraints. We then present a solution technique in which tight lower and upper bounds on the worst-case voltage fluctuations are computed via linear programs. Experimental results indicate that the proposed technique results in errors in the range of a few mV. We also present extensions to single-grid (i.e., power grid only) verification techniques.
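To make the formulation concrete, the following Python sketch sets up the kind of linear program involved: maximizing the voltage drop at each node subject to local and global current constraints. The single resistance matrix `R`, the constraint set, and all numbers are invented for illustration; the thesis's technique additionally handles the coupled power/ground (dual) grid and computes both lower and upper bounds, which this toy model does not attempt.

```python
# Worst-case voltage drop per node under current constraints, as one LP per
# node. Assumes a fixed linear grid model v = R @ i; all values are made up.
import numpy as np
from scipy.optimize import linprog

R = np.array([[2.0, 0.5, 0.2],     # ohms: voltage response of each node
              [0.5, 1.8, 0.4],     # to a unit current drawn at each node
              [0.2, 0.4, 2.2]])
i_max = np.array([0.003, 0.005, 0.002])   # per-block current bounds (A)
I_total = 0.007                            # global peak-current constraint (A)

worst_drop = []
for k in range(3):
    # linprog minimizes, so negate the objective to maximize the drop.
    res = linprog(c=-R[k],
                  A_ub=np.ones((1, 3)), b_ub=[I_total],
                  bounds=list(zip(np.zeros(3), i_max)))
    worst_drop.append(-res.fun)

print([f"{v * 1e3:.1f} mV" for v in worst_drop])
```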
214 |
Methods and Algorithms for Scalable Verification of Asynchronous Designs. Yao, Haiqiong (1 January 2012)
Concurrent systems are becoming more complex with the advent of multi-core processors and the support of concurrent programs. However, errors in concurrent systems are too subtle to detect with traditional testing and simulation. Model checking is an effective method to verify concurrent systems by exhaustively searching the complete state space exhibited by a system. The main challenge for model checking, however, is state explosion: the state space of a concurrent system grows exponentially in the number of its components. The state space explosion problem prevents model checking from being applied to systems of realistic size.
After decades of intensive research, a large number of methods have been developed to attack this well-known problem. Compositional verification is one of the promising methods, as it can scale to large, complex concurrent systems. In compositional verification, the task of verifying an entire system is divided into the smaller tasks of verifying each component of the system individually. The correctness of properties of the entire system can then be derived from the results of local verification on individual components. This method avoids building the global state space for the entire system, and accordingly alleviates the state space explosion problem. To facilitate the application of compositional verification, several issues need to be addressed; a major focus of automated compositional verification is the generation of over-approximate yet accurate environments for the local verification of components.
This dissertation addresses this issue by proposing two abstraction refinement methods that iteratively refine the state space of each component with an over-approximate environment. The basic idea of both methods is to examine the interface interactions among the different components and remove, from each component's state space, the behaviors that are not allowed on its interfaces. After the extra behaviors introduced by the over-approximate environment are removed by abstraction refinement, the initially coarse environments become more accurate. The two methods differ in how they identify and remove the illegal behaviors generated by the over-approximate environments.
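The following Python sketch conveys the flavor of such interface-based refinement on toy labeled transition systems: environment transitions that no synchronized interaction with the component can ever exercise are pruned until a fixpoint is reached. The encoding, the full-synchronization assumption, and the `refine`/`used_env_transitions` helpers are illustrative inventions, not the dissertation's actual algorithms.

```python
# Toy abstraction refinement: prune environment behaviors that the component
# can never match on the interface. States map to lists of (action, successor);
# for simplicity every action is treated as an interface action.

def used_env_transitions(m, e, m0, e0):
    """Explore the fully synchronized product of component m and environment
    e; return the environment transitions some joint execution exercises."""
    seen, stack, used = {(m0, e0)}, [(m0, e0)], set()
    while stack:
        s, t = stack.pop()
        for a, s2 in m.get(s, []):
            for b, t2 in e.get(t, []):
                if a == b:                      # interface synchronization
                    used.add((t, a, t2))
                    if (s2, t2) not in seen:
                        seen.add((s2, t2))
                        stack.append((s2, t2))
    return used

def refine(m, e, m0, e0):
    """Drop environment transitions no interaction can exercise; iterate
    until the environment stops shrinking."""
    while True:
        used = used_env_transitions(m, e, m0, e0)
        pruned = {t: [(a, t2) for a, t2 in succs if (t, a, t2) in used]
                  for t, succs in e.items()}
        if pruned == e:
            return e
        e = pruned

M = {0: [("req", 1)], 1: [("ack", 0)]}                 # component: req;ack loop
E = {0: [("req", 1), ("abort", 1)], 1: [("ack", 0)]}   # over-approximate env
print(refine(M, E, 0, 0))  # {0: [('req', 1)], 1: [('ack', 0)]}: 'abort' cut
```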
For local properties that can be verified on individual components, compositional reasoning scales to large systems by leveraging the proposed abstraction refinement methods. For global properties that cannot be checked locally, however, the state space of the whole system still needs to be constructed. To alleviate the state explosion problem when generating the global state space by composing the local state spaces of the individual components, this dissertation also proposes several state space reduction techniques that simplify the state space of each component, helping the compositional minimization method generate a much smaller global state space for the entire system. These reduction techniques are sound and complete in that they keep all behaviors on the interface without introducing any extra ones; therefore, verification results derived from the reduced global state space are also valid for the original state space of the entire system.
An automated compositional verification framework integrating all the abstraction refinement methods and state space reduction techniques presented in this dissertation has been implemented in the explicit-state model checker Platu. It has been applied to several non-trivial asynchronous circuit designs to demonstrate its scalability. The experimental results show that our automated compositional verification framework is effective on examples that are too complex for monolithic model checking methods to handle.
215 |
Automatic Extraction of Program Models for Formal Software Verification. de Carvalho Gomes, Pedro (January 2015)
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages, for use in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models, and properties.

The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as the ConFlEx tool, and evaluate it on a number of test cases.

The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs.

The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization." To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented in the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe and verifying the extracted nets with a standard Coloured Petri Net analysis tool.
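As an illustration of the final reduction step, the Python sketch below checks termination as reachability of dead markings, on an ordinary place/transition net rather than the Coloured Petri Nets used by the thesis. The net, which loosely models two threads being notified on a condition variable, and all names in the code are invented for the example.

```python
# Termination as "every reachable dead marking lies in an accepted set",
# checked by exhaustive exploration of a small, bounded toy net.
from collections import deque

PLACES = ("waiting", "notified", "done")
TRANSITIONS = {                    # name: (tokens consumed, tokens produced)
    "notify": ({"waiting": 1}, {"notified": 1}),
    "wake":   ({"notified": 1}, {"done": 1}),
}

def enabled(marking, consume):
    return all(marking[PLACES.index(p)] >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = list(marking)
    for p, n in consume.items():
        m[PLACES.index(p)] -= n
    for p, n in produce.items():
        m[PLACES.index(p)] += n
    return tuple(m)

def dead_markings(m0):
    """All reachable markings in which no transition is enabled."""
    seen, queue, dead = {m0}, deque([m0]), set()
    while queue:
        m = queue.popleft()
        succs = [fire(m, c, p) for c, p in TRANSITIONS.values() if enabled(m, c)]
        if not succs:
            dead.add(m)
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return dead

m0 = (2, 0, 0)                # two threads waiting
ok = {(0, 0, 2)}              # the only acceptable final configuration
print(dead_markings(m0) <= ok)   # True: every run ends with both threads done
```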
216 |
A verified framework for symbolic execution in the ACL2 theorem prover. Swords, Sol Otis (11 February 2011)
Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, applying theorem proving methodologies to industrial-scale hardware and software systems requires a large amount of user interaction to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified, and must prove numerous lemmas to allow the theorem prover to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: a proof may fail either because the conjecture was false, or because the prover required more help from the user in order to reach the desired conclusion.
We have implemented a symbolic execution framework inside the ACL2 theorem prover in order to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using BDDs to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false.
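The following self-contained Python sketch illustrates the underlying idea: a tiny reduced-BDD package, a bit-level check of a (false) finite-domain conjecture, and counterexample extraction from the resulting BDD. It is only a toy model of the approach; the thesis's framework is a fully verified proof strategy inside ACL2, which this sketch does not attempt to reproduce.

```python
# Minimal reduced BDDs with apply and counterexample extraction.
from functools import lru_cache

# Nodes are (var, low, high) triples interned in NODES; 0 and 1 are the
# terminal nodes. Variables are identified by their position in the order.
NODES = {}

def mk(v, lo, hi):
    if lo == hi:                       # redundant test: reduce it away
        return lo
    return NODES.setdefault((v, lo, hi), (v, lo, hi))

def var(v):
    return mk(v, 0, 1)

def _and(a, b): return a and b
def _xor(a, b): return a != b

@lru_cache(maxsize=None)
def apply_op(op, u, w):
    """Combine two BDDs with a binary boolean operator (Shannon expansion)."""
    if u in (0, 1) and w in (0, 1):
        return int(op(bool(u), bool(w)))
    i = min(x[0] for x in (u, w) if x not in (0, 1))   # earliest variable
    u0, u1 = (u, u) if u in (0, 1) or u[0] != i else (u[1], u[2])
    w0, w1 = (w, w) if w in (0, 1) or w[0] != i else (w[1], w[2])
    return mk(i, apply_op(op, u0, w0), apply_op(op, u1, w1))

def AND(u, w): return apply_op(_and, u, w)
def XOR(u, w): return apply_op(_xor, u, w)

def counterexample(u, path=()):
    """Return one variable assignment on which BDD u evaluates to true."""
    if u == 1:
        return dict(path)
    if u == 0:
        return None
    i, lo, hi = u
    return (counterexample(hi, path + ((i, 1),))
            or counterexample(lo, path + ((i, 0),)))

# False conjecture: "for single bits, a XOR b equals a AND b". The miter
# (XOR of the two circuits) is satisfiable exactly when they differ.
a, b = var(0), var(1)
miter = XOR(XOR(a, b), AND(a, b))
print(counterexample(miter))   # {0: 1}: any input with a=1 refutes the claim
```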
Our framework seeks to reduce the amount of user interaction necessary to prove theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the detailed implementation of the system. Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time the user spends trying to determine why a proof failed.
217 |
Besimokančiojo mokymo programos verifikavimo strategijos ir jų pritaikymas virtualioje mokymo(si) aplinkoje / Learner's curriculum verification strategies and their adaptation in a virtual learning environment. Idzelytė, Dominyka (16 August 2007)
This paper presents, in condensed form, a model of a component that enriches a virtual learning environment (VLE) with adaptive features: the learner's curriculum and strategies for its verification. The curriculum is the set of courses and content that each registered user of the modified VLE can choose according to his or her needs: the topics to study, the form in which the learning material is presented, and the desired learning level. Verification is the process, executed in the modified VLE on the learning plan the user has created, of determining whether the chosen set of settings matches the learner's real needs; the learner's knowledge of the course material is checked with tests, and the curriculum is then updated accordingly. The verification strategies commonly used are not always optimal and often delay the learning process. The aim of this work is to identify, among the strategies described in the literature or newly devised ones, the most suitable verification strategy for correcting a prebuilt learner's curriculum, and to implement it in the VLE.
218 |
Dosimetric verification of radiation therapy, including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device. Chytyk-Praznik, Krista (January 2009)
Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine whether a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager.

A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID; a computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations.

Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images), and deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.
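The kernel-superposition step described above can be sketched in a few lines of Python. The Gaussian kernel below stands in for the Monte Carlo-generated, monoenergetic a-Si EPID dose kernels of the thesis (which are superposed per energy component rather than applied as a single convolution), and the field geometry and all numbers are invented for illustration.

```python
# Toy portal dose prediction: convolve a predicted incident fluence map
# with a detector dose kernel. Assumes a 1 mm pixel grid; values invented.
import numpy as np
from scipy.signal import fftconvolve

# Predicted incident fluence: a 10x10 cm open field, with half the field
# attenuated to 60% by a water slab placed in the beam.
fluence = np.zeros((256, 256))
fluence[78:178, 78:178] = 1.0
fluence[78:178, 128:178] *= 0.60

# Stand-in dose kernel: normalized 2-D Gaussian (5 mm sigma) modelling
# lateral dose spread in the detector.
x = np.arange(-30, 31)
g = np.exp(-(x / 5.0) ** 2 / 2.0)
kernel = np.outer(g, g)
kernel /= kernel.sum()

portal_dose = fftconvolve(fluence, kernel, mode="same")
print(portal_dose.shape, round(float(portal_dose.max()), 3))
```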
219 |
Closing the building energy performance gap by improving our predictions. Sun, Yuming (27 August 2014)
A growing number of studies show that the predicted energy performance of buildings deviates significantly from actual measured energy use. This so-called "performance gap" may undermine confidence in energy-efficient buildings, and thereby the role of building energy efficiency in the national carbon reduction plan. Closing the performance gap has become a daunting challenge for the professions involved, stimulating them to reflect on how to investigate and better understand the size, origins, and extent of the gap.

The energy performance gap underlines the lack of prediction capability of current building energy models. Specifically, existing predictions are predominantly deterministic, providing a point estimate of the future quantity or event of interest and thereby largely ignoring the error and noise inherent in an uncertain future of building energy consumption. To overcome this, the thesis turns to a thriving area in engineering statistics that focuses on computation-based uncertainty quantification. The work provides theories and models that enable probabilistic prediction of future energy consumption, forming the basis of risk assessment in decision-making.

Uncertainties that affect the wide variety of interacting systems in buildings are organized into five scales (meteorology - urban - building - systems - occupants). At each level, both model form and input parameter uncertainty are characterized with probability, involving statistical modeling and parameter distributional analysis. The quantification of uncertainty at the different system scales is accomplished using the network of collaborators established through an NSF-funded research project. The bottom-up uncertainty quantification approach, which deals with meta-uncertainty, is fundamental for generic application of uncertainty analysis across different types of buildings, under different urban climate conditions, and in different usage scenarios.

Probabilistic predictions are evaluated by two criteria: coverage and sharpness. The goal of probabilistic prediction is to maximize the sharpness of the predictive distributions subject to coverage of the realized values. The method is evaluated on a set of buildings on the Georgia Tech campus, where the energy consumption of each building is monitored, in most cases, through hourly sub-metered consumption data. This research shows that a good match between probabilistic predictions and the real energy consumption of buildings in operation is achievable. Results from the six case buildings show that using the best point estimates of the probabilistic predictions reduces the mean absolute error (MAE) from 44% to 15% and the root mean squared error (RMSE) from 49% to 18% for total annual cooling energy consumption. For monthly cooling energy consumption, the MAE decreases from 44% to 21% and the RMSE from 53% to 28%. More importantly, the entire probability distributions are statistically verified at the annual level of building energy predictions. Based on the uncertainty and sensitivity analysis applied to these buildings, the thesis concludes that the proposed method significantly reduces the magnitude of the building energy performance gap and effectively infers its origins.
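As a small illustration of the two criteria, the Python sketch below computes the empirical coverage and sharpness of central 90% prediction intervals drawn from per-month predictive samples, together with the MAE of the point estimates. The data are synthetic and all numbers are invented; the thesis's models and case-study data are not reproduced here.

```python
# Coverage and sharpness of probabilistic predictions, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
months = 12
# Predictive samples (months x draws), e.g. from a calibrated energy model.
pred = rng.normal(loc=100.0, scale=8.0, size=(months, 5000))
actual = rng.normal(loc=101.0, scale=9.0, size=months)   # metered data

lo = np.percentile(pred, 5, axis=1)    # central 90% prediction interval
hi = np.percentile(pred, 95, axis=1)

coverage = np.mean((actual >= lo) & (actual <= hi))   # want about 0.90
sharpness = np.mean(hi - lo)                          # smaller is better
mae = np.mean(np.abs(np.median(pred, axis=1) - actual))

print(f"coverage={coverage:.2f}  sharpness={sharpness:.1f}  MAE={mae:.1f}")
```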