371 |
Methods and measures for statistical fault localisation. Landsberg, David. January 2016.
Fault localisation is the process of finding the causes of a given error, and is one of the most costly elements of software development. One of the most efficient approaches to fault localisation appeals to statistical methods. These methods are characterised by their ability to estimate how faulty a program artefact is as a function of statistical information about a given program and test suite. However, the major problem facing statistical approaches is their effectiveness -- particularly with respect to finding single (or multiple) faults in large programs typical of the real world. A solution to this problem hinges on discovering new formal properties of faulty programs and developing scalable statistical techniques which exploit them. In this thesis I address this by identifying new properties of faulty programs, developing the formal frameworks and methods which are formally proven to exploit them, and demonstrating that many of our new techniques substantially and statistically significantly outperform competing algorithms at given fault localisation tasks (using p = 0.01) on what (to our knowledge) is one of the largest-scale sets of experiments in fault localisation to date. This research is thus designed to corroborate the following thesis statement: that the new algorithms presented in this thesis are effective and efficient at software fault localisation and outperform state-of-the-art statistical techniques at a range of fault localisation tasks. In more detail, the major thesis contributions are as follows: 1. We perform a thorough investigation into the existing framework of spectrum-based fault localisation (sbfl), which currently stands at the cutting edge of statistical fault localisation. To improve on the effectiveness of sbfl, our first contribution is to introduce and motivate many new statistical measures which can be used within this framework. First, we show that many are well motivated for the task of sbfl. Second, we formally prove equivalence properties of large classes of measures. Third, we show that many of the measures perform competitively with the existing measures in experimentation -- in particular, our new measure m9185 outperforms all existing measures on average in terms of effectiveness, and, along with Kulczynski2, is in a class of measures which statistically significantly outperforms all other measures at finding a single fault in a program (p = 0.01). 2. Having investigated sbfl, our second contribution is to motivate, introduce, and formally develop a new formal framework which we call probabilistic fault localisation (pfl). pfl is similar to sbfl insofar as it can leverage any suspiciousness measure, and is designed to directly estimate the probability that a given program artefact is faulty. First, we formally prove that pfl is theoretically superior to sbfl insofar as it satisfies and exploits a number of desirable formal properties which sbfl does not. Second, we experimentally show that pfl methods (namely, our measure pfl-ppv) substantially and statistically significantly outperform the best-performing sbfl measures at finding a fault in large multiple-fault programs (p = 0.01). Furthermore, we show that for many of our benchmarks it is theoretically impossible to design strictly rational sbfl measures which outperform given pfl techniques. 3. Having addressed the problem of localising a single fault in a program, we address the problem of localising multiple faults.
Accordingly, our third major contribution is the introduction and motivation of a new algorithm MOpt(g) which optimises any ranking-based method g (such as pfl/sbfl/Barinel) for the task of multiple fault localisation. First, we prove that MOpt(g) formally satisfies and exploits a newly identified formal property of multiple fault optimality. Second, we experimentally show that there are values for g such that MOpt(g) substantially and statistically significantly outperforms given ranking-based fault localisation methods at the task of finding multiple faults (p = 0.01). 4. Having developed methods for localising faults as a function of a given test suite, we finally address the problem of optimising test suites for the purposes of fault localisation. Accordingly, we first present an algorithm which leverages model checkers to improve a given test suite by making it satisfy a property of single bug optimality. Second, we experimentally show that on small benchmarks single bug optimal test suites can be generated (from scratch) efficiently when the algorithm is used in conjunction with the cbmc model checker, and that the test suite generated can be used effectively for fault localisation.
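For readers unfamiliar with the sbfl setting, the following is a minimal illustrative sketch (not taken from the thesis) of how a suspiciousness measure is typically computed from program spectra; it uses the standard Ochiai coefficient, and the data and names are assumptions for illustration only.

import math

def sbfl_ranking(coverage, failed):
    """Rank statements by suspiciousness using the Ochiai measure.

    coverage: dict mapping test name -> set of covered statement ids
    failed:   set of test names that failed
    (Illustrative sketch only; the thesis studies many such measures.)
    """
    tests = set(coverage)
    passed = tests - failed
    statements = set().union(*coverage.values())
    scores = {}
    for s in statements:
        a_ef = sum(1 for t in failed if s in coverage[t])   # failing tests covering s
        a_ep = sum(1 for t in passed if s in coverage[t])   # passing tests covering s
        a_nf = len(failed) - a_ef                           # failing tests not covering s
        denom = math.sqrt((a_ef + a_nf) * (a_ef + a_ep))
        scores[s] = a_ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: statement "s3" is covered by every failing test and no passing test,
# so it is ranked most suspicious.
cov = {"t1": {"s1", "s2", "s3"}, "t2": {"s1", "s3"}, "t3": {"s1", "s2"}}
print(sbfl_ranking(cov, failed={"t1", "t2"}))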
|
372 |
Hierarchické modelování plánovacích problémů / Hierarchical Modeling of Planning Problems. Dvořák, Tomáš. January 2016.
Automated planning is the task of finding a sequence of actions leading from an initial state to a desired goal state. There are many formal models for modelling planning problems, and the class of hierarchical formal models is one of them. In this thesis we propose a hierarchical model, called GramPlan, based on attribute grammars. We present methods for reducing the classical STRIPS formalism to GramPlan and prove the correctness of this reduction. We also present a reduction of the FlowOpt workflow model to GramPlan and prove its correctness. Finally, we design a couple of verification methods for attribute grammars and show that these methods are equivalent.
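As a rough illustration of the classical STRIPS formalism that the thesis reduces to GramPlan, the sketch below (an assumed toy example, not taken from the thesis) represents actions as precondition/add/delete sets of facts and finds a plan by breadth-first search over states.

from collections import deque

# A STRIPS action: name, preconditions, add list, delete list (all sets of facts).
actions = [
    ("pick", {"hand-empty", "on-table"}, {"holding"}, {"hand-empty", "on-table"}),
    ("drop", {"holding"}, {"hand-empty", "on-table"}, {"holding"}),
]

def plan(initial, goal):
    """Breadth-first search for a sequence of actions from the initial state to a goal state."""
    queue = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                                # action is applicable
                succ = frozenset((state - delete) | add)    # apply delete then add effects
                if succ not in seen:
                    seen.add(succ)
                    queue.append((succ, steps + [name]))
    return None

print(plan({"hand-empty", "on-table"}, {"holding"}))  # ['pick']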
|
373 |
Soothsharp: překladač C# do jazyka Viper / Soothsharp: A C#-to-Viper translator. Hudeček, Petr. January 2017.
Viper is a verification infrastructure developed at ETH Zurich. Using this infrastructure, programs written in the Viper language may be analyzed for correctness with respect to assertions and contracts. In this thesis, we develop a contracts library and a translator program that compiles C# code into the Viper language and thus allows it to be verified. A user may annotate their C# program with these contracts and then use the translator to determine its functional correctness. The translator supports most C# features, including types and arrays. It also integrates with Visual Studio, showing translation and verification errors to the user on-the-fly.
|
374 |
Verifying parentage and gender of domestic dog conceptuses using microsatellites. Steckler, Daniela. 21 December 2010.
Parentage testing in the domestic dog is finding increasing application for dog breed registries as well as in research. The aim of parentage verification is the correct assignment of both parents to the offspring. For accurate parentage verification, informative microsatellite markers have to be identified. More powerful models to study artificial insemination in bitches will be possible if the paternity and gender of early dog conceptuses can be determined. The amelogenin gene locus has been used in bovine day-six to day-seven embryos for early gender determination, but no research has been done on early conceptuses of the domestic dog. The aim of the current study was to establish an accurate method for parentage and gender determination from domestic dog conceptuses during early pregnancy in a multi-sire insemination trial. Semen from 10 male dogs was used in each of 12 females for artificial insemination. Blood and uterine tissue for DNA extraction were collected from males and females, and embryonic material was collected after ovariohysterectomy between 16 and 30 days after the onset of cytological dioestrus. Twenty-three microsatellite markers were used for parentage verification, and the amelogenin gene locus for gender determination. Mean observed heterozygosity, mean expected heterozygosity (HExp), and mean polymorphic information content (PIC) were high (0.6753, 0.6785, and 0.628, respectively). There were 66 conceptuses. In two, neither parentage nor gender could be established because their tissue samples were contaminated. Parentage could be assigned by CERVUS 3.0.3 in 42 out of 64 of the cases (66%) without difficulty. Another 33% of the cases (21 out of 64) could be resolved using the number of exclusions, LOD scores or manual verification of genotyping errors. In one conceptus, paternity could not be established because its sire may have been either of two siblings. The gender of the female and male dogs was successfully confirmed using the amelogenin gene locus. The gender of the conceptuses was determined using the amelogenin gene locus (50% male, 50% female) but not confirmed by another method of gender determination such as karyotyping. As shown in the current study, the panel of 24 microsatellite markers used provides high information content suitable for parentage verification in multi-sire litters, as well as gender determination of early conceptuses. / Dissertation (MSc)--University of Pretoria, 2010. / Production Animal Studies / unrestricted
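For context, expected heterozygosity and polymorphic information content are standard functions of allele frequencies at a marker; the sketch below shows the usual formulas for a single locus (illustrative only, with assumed frequencies, not the study's actual data or software).

def expected_heterozygosity(freqs):
    """H_exp = 1 - sum(p_i^2) for allele frequencies p_i at one locus."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphic information content:
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
    """
    s = 1.0 - sum(p * p for p in freqs)
    pairs = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                for i in range(len(freqs)) for j in range(i + 1, len(freqs)))
    return s - pairs

freqs = [0.4, 0.3, 0.2, 0.1]   # assumed allele frequencies for one illustrative marker
print(expected_heterozygosity(freqs), pic(freqs))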
|
375 |
Exploiting structure for scalable software verification. Babić, Domagoj. 11 1900.
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses.
Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions, in general, yield easier verification, but also less precise results.
This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and for constructing informal explanations of why a piece of code should work. A natural question to ask is: How can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers.
More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution. Structural abstraction solves a long-standing open problem --- scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but also has already found its way into other domains (like model checking), demonstrating the generality and applicability of this idea. / Science, Faculty of / Computer Science, Department of / Graduate
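The idea of compact symbolic representations that preserve data-flow dependencies can be sketched roughly as follows (a toy illustration under assumed names, not the thesis's actual algorithm): straight-line assignments are evaluated over symbolic expressions, and structurally identical subexpressions are shared rather than duplicated.

class SymExec:
    """Toy symbolic executor for straight-line assignments with expression sharing."""

    def __init__(self):
        self.env = {}      # variable -> its current symbolic expression (a tuple)
        self.table = {}    # hash-consing table: structurally equal expressions are shared

    def intern(self, expr):
        return self.table.setdefault(expr, expr)

    def sym(self, name):
        return self.env.get(name, self.intern(("var", name)))

    def assign(self, var, op, left, right):
        # The expression for `var` refers to the *current* values of its operands,
        # so the data-flow dependencies of the original code are preserved.
        self.env[var] = self.intern((op, self.sym(left), self.sym(right)))

se = SymExec()
se.assign("t", "+", "x", "y")
se.assign("u", "+", "x", "y")
se.assign("z", "*", "t", "u")
print(se.env["z"])                  # ('*', ('+', ('var', 'x'), ('var', 'y')), ...)
print(se.env["t"] is se.env["u"])   # True: the common subexpression is shared, not duplicated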
|
376 |
Formal Verification Of Analog And Mixed Signal Designs Using Simulation Traces. Lata, Kusum. 01 1900.
The conventional approach to validating analog and mixed-signal designs utilizes extensive SPICE-level simulations. The main challenge in this approach is to know when all important corner cases have been simulated. An alternative approach is to use formal verification techniques. Formal verification techniques have gained widespread popularity in the digital design domain, but in the case of analog and mixed-signal designs a large number of test scenarios need to be designed to generate sufficient simulation traces to test all the specified system behaviours. Analog and mixed-signal designs can be formally modeled as hybrid systems, and therefore techniques used for formal analysis and verification of hybrid systems can be applied to analog and mixed-signal designs.
Generally, formal verification tools for hybrid systems work at an abstract level, where the systems are modeled in terms of differential or algebraic equations. However, analog and mixed-signal designers are most comfortable designing circuits at the transistor level. To bridge the gap between verification at the abstract level and validation of designs implemented at the transistor level, the key question to address is: can we formally verify the circuits at the transistor level itself? To this end, we have proposed a framework for the formal verification of analog and mixed-signal designs using SPICE simulation traces in one of the hybrid-systems formal verification tools (Checkmate from CMU). An extension of a formal verification approach for hybrid systems is proposed to verify analog and mixed-signal (AMS) designs. AMS designs can be formally modeled as hybrid systems and therefore lend themselves to the formal analysis and verification techniques applied to hybrid systems. The proposed approach employs simulation traces obtained from an actual design implementation of AMS circuit blocks (for example, in the form of SPICE netlists) to carry out formal analysis and verification. This enables the same platform used for formally validating an abstract model of an AMS design to also be used for validating its different refinements and design implementation, thereby providing a simple route to formal verification at different levels of implementation.
Our approach is illustrated through case studies using simulation traces from different frameworks, namely the Simulink/Stateflow framework and SPICE. We demonstrate the feasibility of our approach around Checkmate with case studies on hybrid systems and on analog and mixed-signal designs.
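As a loose illustration of checking a property against simulation traces (an assumed example, not the actual Checkmate flow or a SPICE netlist), the sketch below numerically simulates a simple RC stage and checks a bound on the output voltage over the resulting trace.

def simulate_rc(v_in=5.0, r=1e3, c=1e-6, dt=1e-6, steps=5000):
    """Forward-Euler trace of an RC low-pass stage: C * dv/dt = (v_in - v) / R."""
    v, trace = 0.0, []
    for _ in range(steps):
        v += dt * (v_in - v) / (r * c)
        trace.append(v)
    return trace

def check_trace(trace, upper_bound):
    """A trace-level safety check: the output never exceeds the bound."""
    return all(v <= upper_bound for v in trace)

trace = simulate_rc()
print(check_trace(trace, upper_bound=5.0 * 1.01))   # True for this overdamped circuit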
|
377 |
Examining Methods and Practices of Source Data Verification in Canadian Critical Care Randomized Controlled Trials. Ward, Roxanne E. January 2013.
Statement of the Problem: Source data verification (SDV) is the process of comparing data collected at the source to data recorded on a Case Report Form, either paper or electronic (1), to ensure that the data are complete, accurate and verifiable. Good Clinical Practice (GCP) Guidelines are vague and lack evidence as to the degree of SDV required and whether or not SDV affects study outcomes.
Methods of Investigation: We performed systematic reviews to establish the published evidence base for methods of SDV and to examine the effect of SDV on study outcomes. We then conducted a national survey of Canadian Critical Care investigators and research coordinators regarding their attitudes and beliefs about SDV. We followed this with an audit of the completed and in-progress Randomized Controlled Trials (RCTs) of the Canadian Critical Care Trials Group (CCCTG).
Results: Systematic Review of Methods of SDV: The most commonly reported or recommended frequency of source data verification (10/14, 71%) was either based on level of risk, or that it be conducted early (i.e. after the 1st patient is enrolled). The amount of SDV recommended or reported varied from 5% to 100%. Systematic Review of Impact of SDV on Study Outcomes: There was no difference in study outcomes for one trial, and the impact could not be assessed in the other. National Survey of Critical Care Investigators and Research Coordinators: Data from the survey showed that 95.8% (115/120) of respondents believed that SDV was an important part of Quality Assurance; 73.3% (88/120) felt that academic studies should do more SDV; and 62.5% (75/120) felt that there is insufficient funding available for SDV. Audit of Source Data Verification Practices in CCCTG RCTs: In the national audit of in-progress and completed CCCTG RCTs, 9/15 (60%) included a plan for SDV and 8/15 (53%) actually conducted SDV. Of the 9 completed published trials, 44% (4/9) conducted SDV.
Conclusion: There is little published evidence on the methods of SDV or on its effect on study outcomes. Based on the results of the systematic review, survey, and audit, more research is needed to build the evidence base for the methods of SDV and their effect on study outcomes.
|
378 |
BSP-Why, un outil pour la vérification déductive de programmes BSP : sémantiques mécanisées et application aux algorithmes d'espace d'états distribués / BSP-Why, a tool for deductive verification of BSP programs: machine-checked semantics and application to distributed state-space algorithms. Fortin, Jean. 14 October 2013.
This thesis falls within the field of formal verification of parallel programs. The aim of formal verification is to ensure that a program will behave as it should, without making mistakes, blocking, or terminating abnormally. This is all the more important in the field of parallel computation, where the cost of calculations can be very high. The BSP model (Bulk Synchronous Parallelism) is a model of parallelism well suited to the use of formal methods. It guarantees a form of structure in the parallel program by organising it into super-steps, each consisting of a phase of computation followed by a phase of communication between the processing units. In this thesis, we chose to extend an existing tool to adapt it to the proof of BSP programs. We based our work on Why, a VCG (verification condition generator) which has the advantage of interfacing with several automatic provers and proof assistants to discharge the proof obligations. The contributions of this thesis are manifold. First, we present a comparison of the available BSP libraries, in order to identify the most commonly used BSP programming primitives, and therefore the most interesting ones to formalise. We then present BSP-Why, our tool for proving BSP programs.
This tool generates a sequential program that simulates the input parallel program, thus allowing the use of Why and its many associated automatic provers to discharge the proof obligations. We then show how BSP-Why can be used to prove the correctness of some simple BSP algorithms, as well as a more complex example: the distributed construction of the state space (model checking) of systems, particularly security protocols. Finally, in order to guarantee the greatest confidence in the BSP-Why tool, we formalise the semantics of the language in the Coq proof assistant. We also prove the correctness of the transformation used to go from a parallel program to a sequential program.
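To give a flavour of the sequential simulation that BSP-Why relies on (an assumed toy sketch, not the tool's actual translation), the code below runs a BSP-style program by executing each processor's local computation in turn within a super-step and delivering messages only at the barrier.

def run_bsp_sequentially(nprocs, superstep, init, rounds):
    """Simulate a BSP program sequentially.

    superstep(pid, state, inbox) -> (new_state, outgoing), where outgoing is a list
    of (dest_pid, message). Messages are delivered only after the barrier, exactly
    as in the BSP model's super-step discipline.
    """
    states = [init(p) for p in range(nprocs)]
    inboxes = [[] for _ in range(nprocs)]
    for _ in range(rounds):
        next_inboxes = [[] for _ in range(nprocs)]
        for p in range(nprocs):                      # local computation phase
            states[p], outgoing = superstep(p, states[p], inboxes[p])
            for dest, msg in outgoing:               # communication requests, buffered
                next_inboxes[dest].append(msg)
        inboxes = next_inboxes                       # barrier: messages become visible
    return states

# Example: each processor adds what it received and forwards its running total
# to its right neighbour in a ring of 4 processors.
def step(pid, state, inbox):
    state = state + sum(inbox)
    return state, [((pid + 1) % 4, state)]

print(run_bsp_sequentially(4, step, init=lambda p: p, rounds=2))   # [3, 1, 3, 5]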
|
379 |
Automata based monitoring and mining of execution traces. Reger, Giles Matthew. January 2014.
This thesis contributes work to the fields of runtime monitoring and specification mining. It develops a formalism for specifying patterns of behaviour in execution traces and defines techniques for checking these patterns in, and extracting patterns from, traces. These techniques represent an extension in the expressiveness of properties that can be efficiently and effectively monitored and mined. The behaviour of a computer system is considered in terms of the actions it performs, captured in execution traces. Patterns of behaviour, formally defined in trace specifications, denote the traces that the system should (or should not) exhibit. The main task this work considers is that of checking that the system conforms to the specification i.e. is correct. Additionally, trace specifications can be used to document behaviour to aid maintenance and development. However, formal specifications are often missing or incomplete, hence the mining activity. Previous work in the field of runtime monitoring (checking execution traces) has tended to either focus on efficiency or expressiveness, with different approaches making different trade-offs. This work considers both, achieving the expressiveness of the most expressive existing tools whilst remaining competitive with the most efficient. These elements of expressiveness and efficiency depend on the specification formalism used. Therefore, we introduce quantified event automata for describing patterns of behaviour in execution traces and then develop a range of efficient monitoring algorithms. To monitor execution traces we need a formal description of expected behaviour. However, these are often difficult to write - especially as there is often a lack of understanding of actual behaviour. The field of specification mining aims to explain the behaviour present in execution traces by extracting specifications that conform to those traces. Previous work in this area has primarily been limited to simple specifications that do not consider data. By leveraging the quantified event automata formalism, and its efficient trace checking procedures, we introduce a generate-and-check style mining framework capable of accurately extracting complex specifications. This thesis, therefore, makes separate significant contributions to the fields of runtime monitoring and specification mining. This work generalises and extends existing techniques in runtime monitoring, enabling future research to better understand the interaction between expressiveness and efficiency. This work combines and extends previous approaches to specification mining, increasing the expressiveness of specifications that can be mined.
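To illustrate the kind of parametric trace checking involved (a minimal assumed sketch, not the thesis's quantified event automata machinery), the monitor below instantiates one small automaton per object and flags traces where next is called on an iterator without a preceding hasNext.

def monitor(trace):
    """Per-object automaton for the classic HasNext property.

    trace: list of (event, object_id) pairs, e.g. ("hasNext", "it1").
    States per object: "unsafe" (next not yet allowed) / "safe" (hasNext observed).
    Returns the list of violations found.
    """
    state = {}          # object_id -> current automaton state
    violations = []
    for event, obj in trace:
        s = state.get(obj, "unsafe")
        if event == "hasNext":
            state[obj] = "safe"
        elif event == "next":
            if s != "safe":
                violations.append(("next without hasNext", obj))
            state[obj] = "unsafe"   # a fresh hasNext is needed before the next call
    return violations

trace = [("hasNext", "it1"), ("next", "it1"), ("next", "it1"),
         ("hasNext", "it2"), ("next", "it2")]
print(monitor(trace))   # [('next without hasNext', 'it1')]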
|
380 |
Ověření účetní závěrky a výroční zprávy ve vybraném podniku / Verification of the financial statement and annual report in a selected company. Málek, Jan. January 2012.
The aim of this diploma thesis is the verification of the financial statement and annual report of a specifically selected company. The application part follows a theoretical characterization of the latest auditing procedures and of legislative and other regulations in the Czech Republic. The thesis first focuses on the theoretical treatment of the audit of the financial statement and annual report. The first chapter characterises the most important terms relating to auditing, such as the subject and objective of an audit, the users of accounting information, and the qualities and qualifications of an auditor. The next part covers the legal and professional background of auditing in the Czech Republic. A major part of the theoretical section presents the latest approach to the auditing procedures for verification of the financial statement and annual report. There is also a description of the auditing process according to a methodological manual. The last and most crucial part of the thesis consists of a practical application of the auditing procedures for verification of the financial statement and annual report in the selected company. The subchapter receiving the most attention, Realization of auditing procedures, is based on an analysis of particular items of assets, liabilities, costs, and profits.
|