1

A Theoretical Study of the Synergy and Lazy Annotation Algorithms

Jayaram, Sampath January 2013 (has links) (PDF)
Given a program with assertions, the assertion checking problem is to tell whether there is an execution of the program that violates one of the assertions. One approach to this problem is to explore different paths towards assertion violations, and to learn “blocking” conditions whenever a path is blocked from reaching the violations. The Synergy algorithm of Gulavani et al. [FSE 2006] and the Lazy Annotation algorithm of McMillan [CAV 2010] are two recent algorithms that follow this approach to assertion checking. Each technique has its own advantages. Synergy uses concrete tests, which are very cheap compared to theorem-prover calls. The tests also tell us where to perform the refinement (the so-called frontier) when an abstraction is too coarse. Synergy maintains its abstraction using partition refinement. The Lazy Annotation algorithm, in contrast, partitions each location into safe and unsafe regions: the safe regions are those from which the error states cannot be reached, and the unsafe regions are the remaining ones. The annotations the algorithm maintains correspond to the safe regions. The advantage annotations have over partition refinement is that annotations can recover from irrelevant predicates used for annotating, whereas once a partition is refined with an irrelevant predicate, it cannot recover from it.

In this work, we make a theoretical study of the algorithms mentioned above. The aim of the study is to answer questions like: Is one algorithm provably better than the other, in terms of the best-case execution (counting the number of refinement steps) on input programs? Is the termination behaviour of one always better than the other? We show that the Synergy and Lazy Annotation algorithms are incomparable, i.e., neither is provably better than the other in terms of best-case execution times. We also show how to view the two algorithms on a common ground, in the sense that a snapshot of one algorithm can be translated into a snapshot of the other. This allows us to import the heuristics of one algorithm into the other, and thereby propose new and potentially improved versions of these algorithms. Viewing them on a common ground also lets us express the final proofs generated by the algorithms in either representation.

We go on to study the proposed new versions of Synergy and Lazy Annotation, comparing their best-case running times and their termination behaviour. We show that the following pairs of algorithms are incomparable: Mod-Syn (Lazy Annotation-style refinement imported into Synergy) and Synergy; Mod-Syn and Lazy Annotation; Synergy and SEAL (Synergy heuristics imported into Lazy Annotation). We show that the SEAL algorithm always performs better than the Lazy Annotation algorithm.
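To make the problem concrete, here is a minimal sketch in Python (a hypothetical example program, not taken from the thesis) of a program with an assertion and the kind of cheap, concrete testing that Synergy exploits before resorting to theorem-prover calls:

```python
# A minimal sketch of the assertion-checking problem (hypothetical example
# program, not from the thesis). The program increments x from 0 and asserts
# that x never equals 50; for large enough inputs the assertion is violated,
# and a cheap concrete test finds a witnessing input.

def program(n):
    x = 0
    for _ in range(n):
        x += 1
        assert x != 50, "assertion violated"

def search_for_violation(bound=100):
    # Concrete tests are cheap compared to theorem-prover calls: just run
    # the program on inputs until one violates the assertion.
    for n in range(bound):
        try:
            program(n)
        except AssertionError:
            return n  # input witnessing a violating execution
    return None

if __name__ == "__main__":
    print(search_for_violation())  # prints 50
```

When no test reaches the violation, both algorithms instead learn a blocking condition (a refined partition in Synergy, an annotation in Lazy Annotation) for the blocked path.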
2

The use of memory state knowledge to improve computer memory system organization

Isen, Ciji 01 June 2011 (has links)
The trends in virtualization as well as multi-core, multiprocessor environments have translated to a massive increase in the amount of main memory each individual system needs to be fitted with, so as to effectively utilize this growing compute capacity. The increasing demand on main memory implies that the main memory devices and their issues are as important a part of system design as the central processors. The primary issues of modern memory are power, energy, and scaling of capacity. Nearly a third of system power and energy can come from the memory subsystem. At the same time, modern main memory devices are limited by technology in their future ability to scale and keep pace with modern program demands, requiring the exploration of alternatives to current main memory storage technology.

This dissertation exploits dynamic knowledge of memory state and memory data values to improve memory performance and reduce memory energy consumption. It proposes a cross-boundary approach that communicates dynamic memory-management state (allocated and deallocated memory) between software and the hardware memory subsystem through a combination of ISA support and hardware structures. These mechanisms help identify memory operations on regions of memory that have no impact on the correct execution of the program because the regions were either freshly allocated or deallocated: data in deallocated regions is no longer useful to the program, and data in freshly allocated memory is not yet useful because the program has not yet defined it. Being cognizant of this, such memory operations are avoided, saving energy and improving the usefulness of main memory. Furthermore, when stores write zeros to memory, the number of stores reaching memory is reduced by capturing them as compressed information stored along with the memory-management state. Using these methods, this dissertation harnesses memory-management state and data-value information to achieve significant savings in energy consumption while extending the endurance limit of memory technologies.
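The following is a simplified software model of the two ideas (an illustrative sketch under our own assumptions; the dissertation proposes ISA and hardware support, not a Python class): dropping writes to deallocated regions and compressing all-zero stores into state bits.

```python
# Simplified model (illustration only) of using memory-management state:
# (1) writes to deallocated regions need not reach memory, and
# (2) all-zero stores can be recorded as a flag instead of written data.

class MemoryModel:
    def __init__(self):
        self.allocated = set()   # addresses currently allocated
        self.zero = set()        # addresses recorded as all-zero (compressed)
        self.data = {}           # values actually stored
        self.writes_avoided = 0

    def malloc(self, addr):
        self.allocated.add(addr)

    def free(self, addr):
        self.allocated.discard(addr)

    def store(self, addr, value):
        if addr not in self.allocated:
            self.writes_avoided += 1  # write to deallocated memory: drop it
            return
        if value == 0:
            self.zero.add(addr)       # record the zero as a state bit,
            self.data.pop(addr, None) # not as stored data
            return
        self.zero.discard(addr)
        self.data[addr] = value

    def load(self, addr):
        return 0 if addr in self.zero else self.data.get(addr)

m = MemoryModel()
m.malloc(0x10)
m.store(0x10, 0)      # compressed: no data write needed
m.free(0x10)
m.store(0x10, 7)      # dead write to freed memory: avoided
print(m.load(0x10), m.writes_avoided)  # 0 1
```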
3

Logic-based techniques for program analysis and specification synthesis

Feliú Gabaldón, Marco Antonio 19 November 2013 (has links)
The thesis investigates agile techniques within the declarative paradigm to solve two problems: program analysis, and the inference of specifications from programs written in multi-paradigm languages and in imperative languages with types, objects, structures, and pointers. As for the current state of the thesis, the program-analysis part is consolidated, while the specification-inference part remains under active development. The first part provides solutions for executing pointer analyses specified in Datalog. Two techniques for executing Datalog specifications were developed: one uses solvers for boolean equation systems, and the other uses rewriting logic as efficiently implemented in the Maude language. The second part develops techniques for inferring specifications from programs. Two specification-inference methods were developed. The first was developed for the functional-logic language Curry and infers equational specifications by abstract interpretation of programs. The second is being developed for realistic imperative languages and has been applied to a subset of the C programming language. It infers specifications as rules that relate the properties a program's state satisfies before and after execution. Moreover, these properties are expressible in terms of the program's own functional abstractions, yielding a very high-level specification that is therefore easier to understand. / Feliú Gabaldón, MA. (2013). Logic-based techniques for program analysis and specification synthesis [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33747
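For flavour, here is a naive bottom-up evaluation of a tiny Andersen-style points-to specification, in the spirit of the Datalog analyses the first part targets (the facts and rules are hypothetical; the thesis executes such specifications with boolean-equation-system solvers and with rewriting logic in Maude, not with this loop):

```python
# Naive least-fixpoint evaluation of a tiny points-to specification,
# Datalog-style (hypothetical facts and rules, for illustration only).

new    = {("p", "o1")}   # p = new Obj()  =>  pts(p, o1)
assign = {("q", "p")}    # q = p          =>  pts(q, o) :- pts(p, o)

def points_to(new_facts, assign_facts):
    pts = set(new_facts)
    changed = True
    while changed:              # iterate rule application to a fixpoint
        changed = False
        for (dst, src) in assign_facts:
            for (v, obj) in list(pts):
                if v == src and (dst, obj) not in pts:
                    pts.add((dst, obj))
                    changed = True
    return pts

print(sorted(points_to(new, assign)))  # [('p', 'o1'), ('q', 'o1')]
```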
4

Algebras of Relations: from algorithms to formal proofs

Brunet, Paul 04 October 2016 (has links)
Algebras of relations appear naturally in many contexts, in computer science as well as in mathematics. They constitute a framework well suited to the semantics of imperative programs. Kleene algebras are a starting point: these algebras enjoy very strong decidability properties and a complete axiomatisation. The goal of this thesis was to export known results from Kleene algebra to some of its extensions.

We first considered a known extension: Kleene algebras with converse. Decidability of these algebras was already known, but the algorithm witnessing this result was too complicated to be practical. We proposed a simpler, more efficient algorithm whose correctness is easier to establish, and used it to prove that this problem is PSpace-complete.

Then we studied Kleene allegories. Few results were known about this extension. Following results about closely related algebras, we established the equivalence between equality in Kleene allegories and equality of certain sets of graphs. We then developed an original automaton model (so-called Petri automata), based on Petri nets, and proved the equivalence between the original problem and comparing these automata. In the restricted setting of identity-free Kleene lattices, we also provided an algorithm performing this comparison. This algorithm uses exponential space; indeed, we proved that comparing Petri automata in this setting is ExpSpace-complete.

Finally, we studied Nominal Kleene algebras. We realised that existing descriptions of these algebras were not suited to the relational semantics of programming languages. We thus modified them accordingly, and in doing so uncovered several natural variations of this model. We then studied formally, in the proof assistant Coq, the bridges one can build between these variations, and between the existing model and our new version of it.
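Equality of Kleene algebra terms coincides with language equivalence of regular expressions, which is the decidable base case the thesis extends. Below is a bounded (and therefore only suggestive) sketch that checks a classical identity up to a fixed word length:

```python
# Bounded comparison of regular-expression languages (illustration only:
# a real decision procedure compares automata, not words up to a length).
from itertools import product
import re

def language(pattern, alphabet="ab", max_len=6):
    # Enumerate all words up to max_len and keep those the pattern
    # matches in full.
    full = re.compile(pattern + r"\Z")
    return {"".join(w)
            for n in range(max_len + 1)
            for w in product(alphabet, repeat=n)
            if full.match("".join(w))}

# (a|b)* = (a*b*)* is a classical Kleene algebra identity.
print(language("(a|b)*") == language("(a*b*)*"))  # True (up to the bound)
```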
5

Computational Issues in Calculi of Partial Inductive Definitions

Kreuger, Per January 1995 (has links)
We study the properties of a number of algorithms proposed to explore the computational space generated by a very simple and general idea: the notion of a mathematical definition, together with a number of suggested formal interpretations of this idea. Theories of partial inductive definitions (PID) constitute a class of logics based on the notion of an inductive definition. Formal systems based on this notion can be used to generalize Horn logic, and they naturally allow and suggest extensions which differ in interesting ways from generalizations based on first-order predicate calculus. For example, the notion of completion generated by a calculus of PID, and the resulting notion of negation, is completely natural and does not require externally motivated procedures such as "negation as failure". For this reason, computational issues arising in these calculi deserve closer inspection. This work discusses a number of finitary theories of PID and analyzes the algorithmic and semantical issues that arise in each of them. There has been significant work on implementing logic programming languages in this setting, and we briefly present the programming language and knowledge modelling tool GCLA II, in which many of the computational problems discussed arise naturally in practice. / Also published as SICS Dissertation no. SICS-D-19
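To illustrate how negation can fall out of a definition's completion, here is a loose propositional sketch in Python (a hypothetical toy, far simpler than the PID calculi or GCLA II): an atom is false exactly when the definition provides no way to derive it, with no separate "negation as failure" machinery.

```python
# Loose propositional sketch (illustration only) of completion-based
# negation: an atom is false precisely when no clause defines it.

definition = {                 # atom -> list of bodies (conjunctions)
    "rain": [[]],              # rain holds unconditionally
    "wet":  [["rain"], ["sprinkler"]],
    "dry":  [],                # defined, but with no clauses
}

def derivable(atom, seen=frozenset()):
    if atom in seen:           # guard against circular definitions
        return False
    bodies = definition.get(atom, [])
    return any(all(derivable(b, seen | {atom}) for b in body)
               for body in bodies)

# Under the completion, "sprinkler" and "dry" are false because nothing
# derives them; "wet" is true via its first clause.
for a in ["rain", "wet", "sprinkler", "dry"]:
    print(a, derivable(a))
```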
