About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

On an implementation of abstract interpretation

Westcott, Doug (January 1988)
This thesis describes an implementation of abstract interpretation and its application to strictness analysis and termination analysis. The abstract interpretation is based on a lattice-theoretic model that abstracts, or translates, functions expressed in lambda-calculus notation and defined over a concrete domain into functions defined over a user-specified, application-dependent abstract domain. The functions thus obtained are then analyzed to find their least fixed points in the lattice that constitutes the abstract domain, using a simplification of the frontiers algorithm of Chris Clack and Simon Peyton Jones. To achieve the required efficiency, the method is implemented using lattice annotations together with constraints on those annotations. The implementation is then applied to the problems of strictness analysis and termination analysis, deriving useful pre-compilation information for many functions. The concrete domains over which the functions are defined may or may not include lists. / Faculty of Science / Department of Computer Science / Graduate
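The core construction, Kleene iteration to a least fixed point over a finite abstract lattice, can be sketched briefly. The following is a minimal illustration only (Mycroft-style strictness analysis over the two-point domain, far simpler than the annotated-lattice frontiers method the thesis describes); the example function and its encoding are assumptions, not taken from the thesis.

    # Strictness analysis by abstract interpretation over the two-point
    # domain: 0 = definitely undefined, 1 = possibly defined.
    from itertools import product

    ARGS = list(product((0, 1), repeat=2))  # all abstract argument pairs

    def fix(step):
        """Least fixed point by Kleene iteration from the all-bottom table."""
        table = {args: 0 for args in ARGS}
        while True:
            new = step(table)
            if new == table:
                return table
            table = new

    # Abstract version of   f x y = if x == 0 then y else f (x - 1) y
    # 'if c then a else b' abstracts to c AND (a OR b); '==' and '-' are strict.
    def step(f):
        return {(x, y): x & (y | f[(x, y)]) for x, y in ARGS}

    f_abs = fix(step)
    print(f_abs[(0, 1)], f_abs[(1, 0)])  # 0 0: f is strict in both arguments

A result of 0 whenever one argument is set to bottom means the function diverges whenever that argument does, so a compiler may safely evaluate it eagerly; this is exactly the pre-compilation information the abstract describes.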
2

A configuration item and baseline identification system for software configuration management

Wilson, William H (January 2010)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
3

A kernel to support computer-aided verification of embedded software

Grobler, Leon D (March 2006)
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2006. / Formal methods, such as model checking, have the potential to improve the reliability of software. Abstract models of systems are subjected to formal analysis, often revealing subtle defects that traditional testing does not discover.
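As a generic illustration of the kind of analysis involved (a sketch of explicit-state safety checking, not the kernel the thesis describes; the toy model and property are invented): a model checker exhaustively explores the reachable states of an abstract model and returns a counterexample path to any state that violates a safety property.

    from collections import deque

    def check_safety(initial, successors, is_bad):
        """Breadth-first search of all reachable states; returns a path to a
        bad state, or None if the safety property holds everywhere."""
        seen = {initial}
        queue = deque([(initial, (initial,))])
        while queue:
            state, path = queue.popleft()
            if is_bad(state):
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + (nxt,)))
        return None

    # Toy model: two processes cycling idle -> trying -> critical -> idle
    # with no locking, so mutual exclusion should fail.
    STEP = {'idle': 'trying', 'trying': 'critical', 'critical': 'idle'}

    def successors(state):
        for i in (0, 1):
            nxt = list(state)
            nxt[i] = STEP[state[i]]
            yield tuple(nxt)

    print(check_safety(('idle', 'idle'), successors,
                       lambda s: s == ('critical', 'critical')))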
4

The application of structure and code metrics to large scale systems

Canning, James Thomas (January 1985)
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as lines of code, McCabe's cyclomatic number, and Halstead's software science variables. This research, however, also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity) to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure the control-flow and data-flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics typically measured different dimensions of the software. Furthermore, correlating the metrics with the observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics once project and subsystem factors were taken into consideration. It was generally true, however, that no single metric satisfactorily explained the variation in the data for any single observed measure of complexity. Trends between many of the metrics and the observed data emerged when individual components were grouped together. Code metrics typically formed groups of increasing complexity that corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group of highly complex components, which was found to be populated by outliers in the observed data. / Ph. D.
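For concreteness, two of the measures named above in their standard textbook forms (the formulas are standard; the example counts are invented, and this is not the thesis's instrumentation): McCabe's cyclomatic number of a control-flow graph is V(G) = E - N + 2P, and the Henry-Kafura Information Flow complexity of a procedure is commonly given as length * (fan-in * fan-out)^2.

    def cyclomatic(edges, nodes, components=1):
        """McCabe's cyclomatic number: V(G) = E - N + 2P."""
        return edges - nodes + 2 * components

    def information_flow(length, fan_in, fan_out):
        """Henry-Kafura Information Flow: length * (fan-in * fan-out)^2."""
        return length * (fan_in * fan_out) ** 2

    # Invented example: a 62-line procedure whose control-flow graph has
    # 9 edges and 7 nodes, with 3 inward and 2 outward information flows.
    print(cyclomatic(edges=9, nodes=7))               # 4 independent paths
    print(information_flow(62, fan_in=3, fan_out=2))  # 62 * 36 = 2232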
5

A comparison of two different model checking techniques

Bull, J. J. D (December 2003)
Thesis (MSc)--University of Stellenbosch, 2003. / ENGLISH ABSTRACT: Model checking is a computer-aided verification technique that is used to automatically verify properties of the formal description of a system. This technique has been applied successfully to detect subtle errors in reactive systems; such errors are extremely difficult to detect using traditional testing techniques. The conventional method of applying model checking is to construct a model manually, either before or after the implementation of a system. Constructing such a model requires time, skill and experience. An alternative method is to derive a model from an implementation automatically. In this thesis two techniques of applying model checking to reactive systems are compared, both of which have advantages as well as problems. Two specific strategies are compared in the area of protocol development: 1. Structuring a protocol as a transition system, modelling the system, and then deriving an implementation from the model. 2. Automatically translating implementation code to a verifiable model. (A sketch of the first strategy follows this abstract.) Structuring a reactive system as a transition system makes it possible to verify the control flow of the system at implementation level, as opposed to verifying it at an abstract level. The result is a closer correspondence between implementation and specification (model). At the same time testing, which is then restricted to small, independent code fragments that manipulate data, is simplified significantly. The construction of a model often takes too long, so verification results may no longer be applicable by the time they become available. To address this problem, the technique of automated model extraction was suggested; it aims to reduce the time required to construct a model by minimising manual input during model construction. A transition system is a low-level formalism, and direct execution through interpretation is feasible; the overhead of interpretation is, however, the major disadvantage of this technique. Automated model extraction has disadvantages too. For example, differences between the implementation and specification languages, such as constructs present in the implementation language that cannot be expressed in the modelling language, make the development of an automated model extraction tool extremely difficult. In conclusion, the two techniques are compared against a set of software development considerations. Since neither technique is always preferable, guidelines are proposed to help select the best approach in different circumstances. / AFRIKAANSE OPSOMMING: (Afrikaans translation of the English abstract above.)
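To make the first strategy concrete (a generic sketch, not the thesis's code): structuring a protocol as an explicit transition system keeps the control flow in a single table that a verification tool can inspect, while the data-manipulating actions remain small functions that can be tested independently. The states, events and actions below are invented.

    # A protocol structured as an explicit transition system.
    def send_ack(msg): pass      # illustrative data-manipulating actions,
    def deliver(msg): pass       # each small enough to unit-test in isolation
    def teardown(msg): pass

    TRANSITIONS = {
        # (state, event)        -> (next state, action)
        ('closed', 'connect'):     ('established', send_ack),
        ('established', 'data'):   ('established', deliver),
        ('established', 'close'):  ('closed', teardown),
    }

    class Protocol:
        def __init__(self):
            self.state = 'closed'

        def handle(self, event, msg=None):
            try:
                self.state, action = TRANSITIONS[(self.state, event)]
            except KeyError:
                raise ValueError(f'{event!r} illegal in state {self.state!r}')
            action(msg)

    p = Protocol()
    p.handle('connect'); p.handle('data', 'hello'); p.handle('close')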
6

Quantified PIRT and uncertainty quantification for computer code validation

Luo, Hu (05 December 2013)
This study investigates and proposes a systematic method of uncertainty quantification for computer code validation. Uncertainty quantification has gained increasing attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires realistic best-estimate (BE) computer codes to follow the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify the important contributors to code uncertainty. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify the code models and parameters that matter for uncertainty quantification. The foundation of the QPIRT is dimensional analysis of the code field equations, using code simulation results to generate dimensionless groups (π groups). Uncertainty quantification using the DAKOTA code is then proposed, based on a sampling approach: nonparametric statistical theory fixes the number of code runs needed to assure 95 percent probability at 95 percent confidence in the code uncertainty intervals. / Graduation date: 2013 / Access restricted to the OSU Community, at author's request, from Dec. 5, 2012 - Dec. 5, 2013
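The "fixed number of code runs" refers to nonparametric (order-statistics) tolerance limits. Assuming the standard first-order Wilks formula, a common choice in CSAU-style analyses though the thesis may use a different variant, the smallest n with 1 - 0.95^n >= 0.95 is 59, so the largest of 59 runs bounds the 95th percentile with 95 percent confidence.

    import math

    def wilks_runs(coverage=0.95, confidence=0.95):
        """Smallest n with 1 - coverage**n >= confidence: the run count whose
        sample maximum is a one-sided (coverage, confidence) tolerance limit."""
        return math.ceil(math.log(1 - confidence) / math.log(coverage))

    print(wilks_runs())            # 59 runs for a one-sided 95/95 statement
    print(wilks_runs(0.95, 0.99))  # 90 runs for 95/99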
7

Secure Digital Provenance: Challenges and a New Design

Rangwala, Mohammed M. (January 2014)
Indiana University-Purdue University Indianapolis (IUPUI) / Derived from the field of art curation, digital provenance is an unforgeable record of a digital object's chain of successive custody and the sequence of operations performed on the object. It plays an important role in assessing the trustworthiness of the object, verifying its reliability and conducting audit trails of its lineage. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Since the history of an object cannot be changed, once a provenance chain has been created it must be protected in order to guarantee its reliability. Provenance can face attacks against the integrity of records and the confidentiality of user information, making security an important trait required of digital provenance. The digital object and its associated provenance can have different security requirements, which makes the security of provenance different from that of traditional data. Research on digital provenance has primarily focused on provenance generation, storage and management frameworks in different fields. Security of digital provenance has also gained attention in recent years, particularly as more and more data is migrated to cloud environments, which are distributed and not under the complete control of data owners. However, a viable secure digital provenance scheme that can provide comprehensive security, particularly for generic and dynamic provenance, is still lacking. In this work, we address two important aspects of secure digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility for the digital object as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, the scheme supports the representation of DAG structures of provenance and supports dynamic information-sharing scenarios in which the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves on the security of typical secure provenance schemes with comparable performance.
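As a rough illustration of the signature-based mutual-agreement idea (a minimal sketch under assumed primitives, not the scheme proposed in the thesis, which additionally handles DAG structure, field confidentiality and dynamic user sets): each record hashes its predecessor, and both the releasing and the receiving user sign the same record body, so the transition of responsibility is explicit and the chain is tamper-evident. Ed25519 from the third-party cryptography package stands in for whatever signature primitive the scheme actually uses.

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_record(prev_hash, operation, sender_key, receiver_key):
        body = json.dumps({'prev': prev_hash, 'op': operation}).encode()
        return {
            'body': body,
            # mutual agreement: both parties to the handover sign the body
            'sender_sig': sender_key.sign(body),
            'receiver_sig': receiver_key.sign(body),
            'hash': hashlib.sha256(body).hexdigest(),
        }

    alice, bob = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    genesis = make_record('0' * 64, 'create document', alice, alice)
    handover = make_record(genesis['hash'], 'edit and transfer', alice, bob)

    # verify() raises InvalidSignature if the record was tampered with
    bob.public_key().verify(handover['receiver_sig'], handover['body'])
    print('handover verifies; chained to', genesis['hash'][:12])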
