About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Testing equivalences and fully abstract models for communicating processes

Nicola, R. de January 1985 (has links)
No description available.
2

Substructural Logical Specifications

Simmons, Robert J. 14 November 2012 (has links)
A logical framework and its implementation should serve as a flexible tool for specifying, simulating, and reasoning about formal systems. When the formal systems we are interested in exhibit state and concurrency, however, existing logical frameworks fall short of this goal. Logical frameworks based on a rewriting interpretation of substructural logics, ordered and linear logic in particular, can help. To this end, this dissertation introduces and demonstrates four methodologies for developing and using substructural logical frameworks for specifying and reasoning about stateful and concurrent systems. Structural focalization is a synthesis of ideas from Andreoli’s focused sequent calculi and Watkins’s hereditary substitution. We can use structural focalization to take a logic and define a restricted form of derivations, the focused derivations, that form the basis of a logical framework. We apply this methodology to define SLS, a logical framework for substructural logical specifications, as a fragment of ordered linear lax logic. Logical correspondence is a methodology for relating and inter-deriving different styles of programming language specification in SLS. The styles we connect range from very high-level specification styles like natural semantics, which do not fully specify the control structure of programs, to low-level specification styles like destination-passing, which provide detailed control over concurrency and control flow. We apply this methodology to systematically synthesize a low-level destination-passing semantics for a Mini-ML language extended with stateful and concurrent primitives. The specification is mostly high-level except for the relatively few rules that actually deal with concurrency. Linear logical approximation is a methodology for deriving program analyses by performing abstract analysis on the SLS encoding of the language’s operational semantics. We demonstrate this methodology by deriving a control flow analysis and an alias analysis from suitable programming language specifications. Generative invariants are a powerful generalization of both context-free grammars and LF’s regular worlds that allow us to express invariants of SLS specifications in SLS. We show that generative invariants can form the basis of progress-and-preservation-style reasoning about programming languages encoded in SLS.
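The gap between these specification styles can be pictured with a rough Haskell analogy (SLS specifications are logical rules, not functional programs, so everything below, including the names, is illustrative only): a big-step "natural semantics" evaluator borrows its control structure from the meta-language, while a destination-passing version names every intermediate destination, making sequencing and potential concurrency between sub-computations explicit.

```haskell
-- Illustrative only: a big-step ("natural semantics") evaluator next to a
-- destination-passing one for a toy expression language. The real SLS
-- specifications are logical rules, not Haskell; this sketch only mirrors
-- how much control structure each style pins down.
module NaturalVsDestination where

import Data.IORef

data Exp = Lit Int | Plus Exp Exp

-- Natural semantics: the meta-language's own evaluation order is reused,
-- so the control structure of the object language stays implicit.
evalBig :: Exp -> Int
evalBig (Lit n)      = n
evalBig (Plus e1 e2) = evalBig e1 + evalBig e2

-- Destination-passing style: every intermediate result is written to an
-- explicit destination, making the sequencing (and potential concurrency)
-- of the two sub-evaluations a visible, local choice.
type Dest = IORef (Maybe Int)

evalDest :: Exp -> Dest -> IO ()
evalDest (Lit n) d = writeIORef d (Just n)
evalDest (Plus e1 e2) d = do
  d1 <- newIORef Nothing
  d2 <- newIORef Nothing
  evalDest e1 d1            -- these two steps could be scheduled concurrently
  evalDest e2 d2
  Just v1 <- readIORef d1
  Just v2 <- readIORef d2
  writeIORef d (Just (v1 + v2))
```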
3

Javalite - An Operational Semantics for Modeling Java Programs

Wesonga, Saint Oming'o 04 November 2012 (has links) (PDF)
Java is currently a widely used programming language. However, there is no formal definition of Java's semantics. Consequently, Java code does not have a universal meaning. This work discusses recent attempts to formalize Java and presents a new formalism of Java called Javalite. In contrast to common approaches to formalization, Javalite is purely syntactic in its definition. Syntactic operational semantics use the structure of the language to define its behavior. Javalite models most Java features with notable exceptions being threads, reflection, and interfaces. This work presents an executable semi-formal model of Javalite in PLT Redex. Being executable means that Javalite programs can be run using this model. We then render the semi-formal model in the Coq theorem prover and present theorems stating that the operational semantics are decidable and deterministic. This formal model can then be used to facilitate research in areas such as proving properties of algorithms that perform various analyses on Java code, e.g. verification, optimization, and refactoring.
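As a rough picture of what a purely syntactic operational semantics looks like, the sketch below defines a one-step reduction function for a toy expression fragment in Haskell; the names are hypothetical, and the real Javalite model is written in PLT Redex over a far richer grammar of Java programs, so this is an analogy rather than the thesis's definition.

```haskell
-- A minimal sketch (not Javalite itself): a syntactic small-step semantics
-- for a toy fragment, where each rule rewrites one syntactic form into
-- another. Javalite is defined in the same spirit in PLT Redex.
module SmallStep where

data Exp
  = IntLit Int
  | BoolLit Bool
  | Add Exp Exp
  | If Exp Exp Exp
  deriving Show

-- One step of reduction, or Nothing if the expression is a value or stuck.
step :: Exp -> Maybe Exp
step (Add (IntLit a) (IntLit b)) = Just (IntLit (a + b))
step (Add e1 e2)
  | Just e1' <- step e1          = Just (Add e1' e2)   -- reduce left operand first
  | Just e2' <- step e2          = Just (Add e1 e2')
step (If (BoolLit True)  t _)    = Just t
step (If (BoolLit False) _ f)    = Just f
step (If c t f)
  | Just c' <- step c            = Just (If c' t f)
step _                           = Nothing

-- The rules are tried in a fixed order, so each term has at most one successor.
run :: Exp -> Exp
run e = maybe e run (step e)
```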
4

Psi-calculi: a framework for mobile process calculi: Cook your own correct process calculus - just add data and logic

Johansson, Magnus January 2010 (has links)
A psi-calculus is an extension of the pi-calculus with nominal data types for data structures, logical assertions, and conditions. These can be transmitted between processes and their names can be statically scoped as in the standard pi-calculus. Expressiveness and therefore modelling convenience significantly exceed those of other formalisms: psi-calculi can capture the same phenomena as other extensions of the pi-calculus, and can be more general, e.g. by allowing structured channels, higher-order formalisms such as the lambda calculus for data structures, and predicate logic for assertions. Ample comparisons to related calculi are provided and a few significant applications are discussed. The labelled operational semantics and the definition of bisimulation are straightforward, without a structural congruence. Minimal requirements on the nominal data and logic are established in order to prove general algebraic properties of psi-calculi. The purity of the semantics is on par with the pi-calculus. The theory of weak bisimulation is established, where the tau actions are unobservable. The notion of barb is defined as the output label of a communication action, and weak barbed equivalence is bisimilarity for tau actions and preservation of weak barbs in all static contexts. It is proved that weak barbed equivalence coincides with weak bisimulation equivalence. A symbolic transition system and bisimulation equivalence are presented, and shown to be fully abstract with respect to bisimulation congruence in the non-symbolic semantics. The symbolic semantics is necessary for an efficient implementation of the calculus in automated tools exploring state spaces, and the full abstraction property means processes are bisimilar in the symbolic setting if and only if they are bisimilar in the original semantics. Psi-calculi remove the necessity of proving general properties every time a new mobile process calculus is defined. By properly instantiating the framework, the properties are guaranteed to hold, thus removing the need to do tedious and error-prone proofs.
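The parameterisation described above can be pictured roughly as follows; the Haskell names are hypothetical, and this shows only the shape of a psi-calculus instance (terms, conditions, assertions, entailment and composition), not the thesis's formal nominal-logic definitions.

```haskell
-- Illustrative shape of a psi-calculus instance, not the thesis's formal
-- definition: processes are parameterised by a datatype of terms, a logic of
-- conditions, and assertions with composition and entailment.
module PsiSketch where

type Name = String

-- An instance supplies the three parameters plus entailment and composition.
data PsiInstance term cond assr = PsiInstance
  { unitAssr :: assr                       -- the unit assertion
  , compose  :: assr -> assr -> assr       -- assertion composition
  , entails  :: assr -> cond -> Bool       -- the entailment relation
  , chanEq   :: term -> term -> cond       -- channel equivalence as a condition
  }

-- Process syntax, generic in the instance parameters.
data Proc term cond assr
  = Nil
  | Output term term (Proc term cond assr)          -- send a term on a channel
  | Input  term Name (Proc term cond assr)          -- receive, binding a name
  | Case   [(cond, Proc term cond assr)]            -- conditional branches
  | Par    (Proc term cond assr) (Proc term cond assr)
  | Res    Name (Proc term cond assr)               -- name restriction (scoping)
  | Assert assr                                     -- assertion process
```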
5

Analysis Techniques for Concurrent Programming Languages

Tamarit Muñoz, Salvador 02 September 2013 (has links)
Concurrent languages are increasingly present in our society, both in new technologies and in the systems used on a daily basis. Moreover, given the current distribution of systems and their internal architecture, one can expect this to remain so in the coming years. In this context, the development of tools that support the implementation of concurrent programs becomes essential. Furthermore, the behaviour of concurrent systems is particularly difficult to analyse, so any tool that helps with this task, even in a limited way, is very useful. For example, one can find tools for the debugging, analysis, testing, optimisation, or simplification of programs, which are widely used by programmers nowadays. The purpose of this thesis is to introduce, through various concurrent programming languages, analysis techniques that can help to improve the experience of developing and releasing software for concurrent models. The thesis introduces both static analyses (approximating all possible executions) and dynamic analyses (considering one specific execution). The topics considered here differ enough from each other to be fully independent; nevertheless, they share a common link: they can be used to analyse properties of a concurrent programming language. All the analyses presented have been formally defined and their correctness has been proved, ensuring that the results have the degree of reliability needed by demanding systems, for instance critical systems. The thesis also describes the software tools that implement the proposed ideas, which gives the work a usefulness beyond the theoretical contribution by allowing the analyses to be put into practice and tested on real-world examples. All the ideas presented here are, by themselves, approaches applicable in many current contexts and problems. Moreover, each serves as a starting point for derived analyses, as well as for adaptation to other languages of the same family; this added value is confirmed by later works that are already benefiting from the results obtained in this thesis. / Tamarit Muñoz, S. (2013). Analysis Techniques for Concurrent Programming Languages [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31651
6

Deterministic Concurrency Using Lattices and the Object Capability Model / Determinism i parallelliserade program med hjälp av gitterstrukturer och objektsförmågor

Arvidsson, Ellen January 2018 (has links)
Parallelization is an important part of modern data systems. However, the non-determinism of thread scheduling introduces the difficult problem of considering all possible execution orders when constructing an algorithm. Therefore, deterministic-by-design concurrent systems are attractive. A recent approach, LVars, uses data that belongs to a lattice with a predefined join operation. Updates to shared data are carried out using the join operation, so the updates commute. Together with limiting the reads of shared data, this guarantees a deterministic result. The Reactive Async framework follows a similar approach but has several aspects that can cause a non-deterministic result. The goal of this thesis is to explore how we can amend Reactive Async in order to guarantee a deterministic result. First, the subtleties of combining lattice-based data with concurrency are explored. Then a formal model based on a simple object-oriented language is constructed. The constructed small-step operational semantics and type system are shown to guarantee a form of determinism. This shows that an LVars-like system can be implemented in an object-oriented setting. Furthermore, the work can act as a basis for future revisions of Reactive Async and similar frameworks.
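The join-based update discipline can be sketched in a few lines; the code below is a hypothetical Haskell illustration, not the Reactive Async or LVish API, and it elides the synchronisation and quiescence detection a real framework would provide.

```haskell
-- A minimal sketch of the idea (not the Reactive Async or LVish API):
-- a shared cell holds an element of a join-semilattice, writes go through
-- the join, and reads only observe whether a chosen threshold has been passed.
module JoinCell where

import Data.IORef
import Control.Concurrent (forkIO, threadDelay)

class JoinSemilattice a where
  bottom :: a
  join   :: a -> a -> a

-- Example lattice: natural numbers under maximum.
newtype MaxNat = MaxNat Int deriving (Eq, Ord, Show)

instance JoinSemilattice MaxNat where
  bottom = MaxNat 0
  join (MaxNat x) (MaxNat y) = MaxNat (max x y)

newtype Cell a = Cell (IORef a)

newCell :: JoinSemilattice a => IO (Cell a)
newCell = Cell <$> newIORef bottom

-- Writes commute because join is associative, commutative and idempotent,
-- so the final cell contents do not depend on scheduling order.
putJoin :: JoinSemilattice a => Cell a -> a -> IO ()
putJoin (Cell ref) x = atomicModifyIORef' ref (\old -> (join old x, ()))

-- A threshold read: only reveals whether the value has reached the bound.
reached :: Ord a => Cell a -> a -> IO Bool
reached (Cell ref) bound = (>= bound) <$> readIORef ref

main :: IO ()
main = do
  c <- newCell
  mapM_ (\n -> forkIO (putJoin c (MaxNat n))) [3, 1, 7, 5]
  threadDelay 10000                 -- crude wait; a real framework would synchronise
  print =<< reached c (MaxNat 7)    -- expected True once all writes have landed
```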
7

Eager, Lazy, and Other Executions for Predicative Programming

Lai, Yu Cheong Albert 08 August 2013 (has links)
Many programs are executed according to the conventional, eager execution order, for which verification of execution costs is well understood. However, there are other execution orders in use. One such order in common use is lazy execution or lazy evaluation, which is mostly demand-driven. Laziness supports better decompositions of algorithms, e.g., into modular producers and consumers, which enables compositional reasoning about answer correctness, but then timing correctness is more elusive. This thesis gives a formal method for verifying lazy timing, compositional with respect to program structure; it is an extension of a predicative programming theory. Predicative programming theories are formal methods that unify both specifications and programs as predicates or boolean-typed expressions over memory state and other quantities of interest. Their strengths are mathematical simplicity and support of program development and verification by incremental refinements. Among these theories, Hehner's a Practical Theory of Programming has the further strength of leaving termination and timing open rather than built in, and therefore is a flexible substrate for various timing schemes corresponding to various execution strategies. We use this substrate for our method for lazy timing. This thesis also proves soundness of the eager timing scheme in Hehner's work with respect to an eager operational semantics, and of our lazy timing scheme with respect to a lazy operational semantics. Thus, if refinements promise an upper time bound, then execution actually stops within that time. Lastly, this thesis outlines a space of more operational semantics, which provides ground for further execution strategies.
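The producer/consumer decomposition that laziness buys can be seen in a short example; the Haskell below is illustrative only and unrelated to the thesis's predicative formalism. The answer is easy to reason about compositionally, while the question of when each element is actually computed, which the thesis's timing scheme addresses formally, is driven by the consumer's demand.

```haskell
-- A small example of the decomposition laziness makes cheap: an infinite
-- producer composed with independent transformers and a bounded consumer.
-- Answer correctness is compositional; the work actually done is bounded by
-- the consumer's demand, not by the order the definitions are written in.
module LazyPipeline where

-- Producer: all naturals (never materialised in full).
naturals :: [Integer]
naturals = [0 ..]

-- Transformers: each is independently testable and reusable.
squares :: [Integer] -> [Integer]
squares = map (^ 2)

odds :: [Integer] -> [Integer]
odds = filter odd

-- Consumer: demands only ten elements, which is what bounds the computation.
main :: IO ()
main = print (take 10 (odds (squares naturals)))
```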
8

Interface adaptation for conversational services

Wang, Kenneth W.S. January 2008 (has links)
The proliferation of services on the web is leading to the formation of service ecosystems wherein services interact with one another in ways not foreseen during their development or deployment. This means that over its lifetime, a service is likely to be reused across multiple interactions, such that in each of them a different interface is required from it. Implementing, testing, deploying, and maintaining adapters to deal with this multiplicity of required interfaces can be costly and error-prone. The problem is compounded in the case of services that do not follow simple request-response interactions, but instead engage in conversations comprising arbitrary patterns of message exchanges. A key challenge in this setting is service mediation: the act of retrofitting existing services by intercepting, storing, transforming, and (re-)routing messages going into and out of these services so they can interact in ways not originally foreseen. This thesis addresses one aspect of service mediation, namely service interface adaptation. This problem arises when the interface that a service provides does not match the interface that it is expected to provide in a given interaction. Specifically, the thesis focuses on the reconciliation of mismatches between behavioural interfaces, that is, interfaces that capture ordering constraints between message exchanges. We develop three complementary proposals. Firstly, we propose a visual language for specifying adapters for conversational services. The language is based on an algebra of operators that are composed to define links between provided and required interfaces. These expressions are fed into an execution engine that intercepts, buffers, transforms and forwards messages to enact the adapter specification. Secondly, we endow such adapter specifications with a formal semantics defined in terms of Petri nets. The formal semantics is used to statically check the correctness of adapter specifications. Finally, we propose an alternative approach to service interface adaptation that does not require hard-wired links between provided and required interfaces. This alternative approach is based on the definition of mapping rules between message types, and is embodied in an adaptation machine. The adaptation machine sits between pairs of services and manipulates the exchanged messages according to a repository of mapping rules. The adaptation machine is also able to detect deadlocks and information loss at runtime.
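The mapping-rule alternative can be pictured as follows; the Haskell below is a hypothetical sketch of a rule repository and a single adaptation step only, and it omits the buffering, deadlock detection and information-loss checks that the thesis's adaptation machine performs.

```haskell
-- A sketch of the mapping-rule idea only (names are hypothetical): rules map
-- messages of one type into zero or more messages of another, and the adapter
-- applies whichever rules match the intercepted message.
module AdapterSketch where

import qualified Data.Map.Strict as Map

type MsgType = String
data Message = Message { msgType :: MsgType, payload :: [(String, String)] }
  deriving Show

-- A mapping rule: given a message of the source type, produce the messages
-- expected by the required interface (possibly none, possibly several).
data MappingRule = MappingRule
  { fromType  :: MsgType
  , transform :: Message -> [Message]
  }

type Repository = Map.Map MsgType [MappingRule]

mkRepository :: [MappingRule] -> Repository
mkRepository = foldr (\r -> Map.insertWith (++) (fromType r) [r]) Map.empty

-- The adaptation step: intercept a message, apply matching rules, forward
-- the results; unmatched messages are passed through unchanged.
adapt :: Repository -> Message -> [Message]
adapt repo m = case Map.lookup (msgType m) repo of
  Nothing    -> [m]
  Just rules -> concatMap (`transform` m) rules

-- Example rule: split a combined "PlaceOrder" into the two messages a
-- legacy provider expects.
splitOrder :: MappingRule
splitOrder = MappingRule "PlaceOrder" $ \m ->
  [ Message "CreateOrder"  (payload m)
  , Message "RequestQuote" (payload m) ]
```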
9

Contribution à une méthode outillée pour la conception de langages de modélisation métier interopérables, analysables et prouvables pour l'Ingénierie Système basée sur des Modèles / Contribution to an equipped approach for the design of executable, verifiable and interoperable Domain Specific Modelling Languages for Model Based Systems Engineering

Nastov, Blazo 15 November 2016 (has links)
Systems Engineering (SE) is an interdisciplinary and collaborative approach to the successful design and management of large-scale complex systems. Among other principles, SE promotes a model-based (or model-driven) approach for all stages of the system design process, denoted Model-Based Systems Engineering (MBSE). This implies concepts, techniques and tools for creating and managing the various system models used by stakeholders, and for reaching and improving the quality of those models so that stakeholders can make decisions faster and more efficiently, with sufficient confidence. Indeed, these decisions impact all downstream phases of system engineering and development, up to the realization and deployment of the real system: its functioning, safety, security, induced costs and so on. In this work, particular attention is given to model verification and validation (V&V). The goals are to ensure, prior to decision making, first, that models are coherent, well-formed, and correctly built and represented, and second, that they are trustworthy and relevant, representing as accurately as possible the viewpoints of the system under design as expected by stakeholders. Such models give stakeholders confidence and trust, helping them make, and also justify, decisions. Models are created using modeling languages that are specifically tailored to a given viewpoint of a system, denoted Domain Specific Modeling Languages (DSMLs). The basic principles on which a DSML rests are its syntax and its semantics, but current DSMLs have been studied more from the syntactic point of view than from the semantic one, which is often neglected or, when needed, provided by translating the DSML into third-party formalisms. This is the key limitation preventing the deployment of a successful V&V strategy in an MBSE context. To overcome this shortcoming, the thesis first proposes a conceptual contribution consisting of a new metamodeling language, called eXecutable, Verifiable and Interoperable Core (xviCore), allowing stakeholders to build DSMLs (called xviDSMLs) that integrate semantics along with their syntax. The solution combines three meta-languages: an object-oriented metamodeling language for the syntactic part, together with a formal behavioral modeling language and a property modeling language for the semantic part. The methodological contribution enables the deployment of V&V strategies based on direct (transformation-free) model verification by simulation and property proof. We propose a mechanism to simulate the expected behavior of a system of interest (SoI) through model execution, based on a blackboard-style communication model, and a mechanism for the specification and verification of formal properties. The technical contribution is a deployable Eclipse-EMF plug-in that implements the xviCore metamodeling language together with the simulation and formal property verification mechanisms.
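The underlying idea of bundling a DSML's syntax with an executable semantics and checkable properties can be pictured as follows; this Haskell fragment is purely hypothetical and only echoes the intent of xviCore, which is an Eclipse-EMF metamodelling language operating on metamodels rather than on Haskell values.

```haskell
-- A hypothetical sketch (not xviCore): a DSML definition bundles abstract
-- syntax/state with an executable step relation and named formal properties,
-- so models can be simulated and invariants checked directly, without
-- translating into a third-party model checker.
module DsmlSketch where

-- A language definition is parameterised by its model/state type.
data Dsml state event = Dsml
  { initial    :: state
  , step       :: state -> [(event, state)]   -- executable behavioural semantics
  , properties :: [(String, state -> Bool)]   -- named invariants to check
  }

-- Bounded simulation: explore reachable states up to a depth and report the
-- name of the first violated property, if any.
simulate :: Int -> Dsml state event -> Maybe String
simulate depth lang = go depth [initial lang]
  where
    go 0 _  = Nothing
    go _ [] = Nothing
    go n ss =
      case [name | s <- ss, (name, p) <- properties lang, not (p s)] of
        (bad : _) -> Just bad
        []        -> go (n - 1) [s' | s <- ss, (_, s') <- step lang s]
```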
