  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Conception d'un noyau de vérification de preuves pour le λΠ-calcul modulo / Design of a proof-checking kernel for the λΠ-calculus modulo

Boespflug, Mathieu 18 January 2011 (has links) (PDF)
Recent years have seen the emergence of interactive proof assistants that are rich in features and mature in implementation, enabling large-scale formalizations of paper results and the resolution of famous conjectures. But these many proof assistants rest on almost as many logics as theoretical foundations. Cousineau and Dowek (2007) propose the λΠ-calculus modulo as a universal target framework for all of these proof environments. In this thesis we show how this particularly simple formalism admits an implementation of a proof checker that is small yet modular and efficient, and to whose correctness the consistency of entire systems can be reduced.

A growing number of proofs depend on intensive computation, as in Gonthier's (2007) proof of the four colour theorem. Methodologies such as SSReflect and its companion tools favour proofs containing many small computations over purely deductive proofs. Encoding proofs from other systems into the λΠ-calculus modulo introduces still more computation. We show how to manage the size of these computations by interpreting entire proofs as functional programs, which can be compiled to machine code using standard, off-the-shelf compilers. To this end we use an untyped variant of normalization by evaluation (NbE), and show how to optimize previous formulations of it.

Through a single small change to the interpretation of proof terms, we also arrive at a representation of proofs in higher-order abstract syntax (HOAS), which naturally admits a type-checking algorithm requiring no explicit typing context. We generalize this algorithm to all pure type systems (PTS). We observe that this algorithm is an extension, to a setting with dependent types, of the type-checking algorithm of the HOL family of proof assistants. This observation leads us to develop an LCF-style architecture for a large class of PTS, that is, an architecture in which all proof terms are correct by construction, a priori, and thus need not be checked a posteriori. We formally prove in Coq a correspondence theorem between context-free type systems and their standard counterparts with explicit contexts. This work bridges two historical lineages of proof assistants: the lineage descending from LCF, from which we borrow the kernel architecture, and that descending from Automath, from which we inherit the notion of dependent types.

The algorithms presented in this thesis are at the heart of a new proof checker called Dedukti, and have also been transferred to a more mature system: Coq. In collaboration with Dénès, we show how to extend untyped NbE to handle the syntax and reduction rules of the calculus of inductive constructions (CIC). In collaboration with Burel, we generalize previous work by Cousineau and Dowek (2007) on encoding a large class of PTS into the λΠ-calculus modulo, extending it to PTS with inductive types, pattern matching, and fixpoint operators.
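The untyped normalization-by-evaluation technique the abstract refers to can be sketched in a few lines. The following is a minimal illustration for the pure λ-calculus only, not Dedukti's actual implementation: terms are evaluated into host-language closures, and values are then "read back" into syntactic normal forms.

```python
# Minimal untyped normalization by evaluation (NbE) for the pure
# lambda-calculus. Abstractions evaluate to host-language closures;
# beta-reduction is delegated to the host by calling them. A sketch,
# not Dedukti's implementation.
from dataclasses import dataclass

# --- Syntax: bound variables use de Bruijn indices ---
@dataclass
class Var:
    idx: int
@dataclass
class Lam:
    body: object
@dataclass
class App:
    fn: object
    arg: object

# --- Semantic values: a stuck variable (de Bruijn *level*) plus arguments ---
@dataclass
class VNeutral:
    level: int
    args: tuple

def eval_(t, env):
    """Evaluate a term to a semantic value; env is a tuple of values."""
    if isinstance(t, Var):
        return env[t.idx]
    if isinstance(t, Lam):
        return lambda v, t=t, env=env: eval_(t.body, (v,) + env)
    f, a = eval_(t.fn, env), eval_(t.arg, env)
    if callable(f):
        return f(a)                      # beta-reduce via the host language
    return VNeutral(f.level, f.args + (a,))

def readback(v, depth):
    """Turn a semantic value back into a term in normal form."""
    if callable(v):
        # Apply the closure to a fresh variable and recurse under the binder.
        return Lam(readback(v(VNeutral(depth, ())), depth + 1))
    t = Var(depth - v.level - 1)         # convert level back to index
    for a in v.args:
        t = App(t, readback(a, depth))
    return t

def normalize(t):
    return readback(eval_(t, ()), 0)
```

For example, normalizing the self-application of the identity, `App(Lam(Var(0)), Lam(Var(0)))`, yields `Lam(Var(0))`.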
492

Type Systems for Distributed Programs: Components and Sessions

Dardha, Ornela 19 May 2014 (has links) (PDF)
Modern software systems, in particular distributed ones, are everywhere around us and underpin our everyday activities. Guaranteeing their correctness, consistency and safety is therefore of paramount importance, and their complexity makes the verification of such properties a very challenging task. It is natural to expect these systems to be reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing the communication patterns of a program at runtime. ii) In order to be usable, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of this Ph.D. was to develop powerful techniques based on formal methods for verifying correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems proved an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, like deadlock or livelock freedom, in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, like deadlock freedom, livelock freedom and progress.
Most importantly, we exploit an encoding of the types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand the expressive power of concurrent calculi with structured communication primitives and how they stand with respect to standard typed concurrent calculi, namely (variants of) the typed π-calculus. We then show how to derive, by encoding, basic properties of the session π-calculus, like type safety, as well as complex ones, like progress.
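The structured communication types involved can be given a toy flavour in code. The sketch below uses its own simplified syntax, not the dissertation's encoding: session types are nested tuples, and duality (the condition under which two channel endpoints may be safely connected) swaps send and receive pointwise.

```python
# Session types as nested Python tuples, with duality.
#   ('send', payload, cont) -- send a value of type payload, continue as cont
#   ('recv', payload, cont) -- receive, then continue as cont
#   'end'                   -- closed session
# A channel endpoint and its peer must have dual types for communication
# to be safe. Simplified illustration; the dissertation's encoding into
# the typed pi-calculus also handles linearity, branching and recursion.

def dual(s):
    if s == 'end':
        return 'end'
    tag, payload, cont = s
    return ('recv' if tag == 'send' else 'send', payload, dual(cont))

def compatible(s, t):
    """Two endpoints can be safely connected iff their types are dual."""
    return dual(s) == t
```

For a protocol that sends an int and then receives a bool, `('send', 'int', ('recv', 'bool', 'end'))`, the dual is `('recv', 'int', ('send', 'bool', 'end'))`, and duality is involutive.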
493

Integrating models and simulations of continuous dynamic system behavior into SysML

Johnson, Thomas Alex 05 May 2008 (has links)
Contemporary systems engineering problems are becoming increasingly complex as they are handled by geographically distributed design teams, constrained by the objectives of multiple stakeholders, and inundated by large quantities of design information. According to the principles of model-based systems engineering (MBSE), engineers can effectively manage increasing complexity by replacing document-centric design methods with computerized, model-based approaches. In this thesis, modeling constructs from SysML and Modelica are integrated to improve support for MBSE. The Object Management Group has recently developed the Systems Modeling Language (OMG SysML) to provide a comprehensive set of constructs for modeling many common aspects of systems engineering problems (e.g. system requirements, structures, functions). Complementing these SysML constructs, the Modelica language has emerged as a standard for modeling the continuous dynamics (CD) of systems in terms of hybrid discrete-event and differential algebraic equation systems. The integration of SysML and Modelica is explored from three different perspectives: the definition of CD models in SysML; the use of graph transformations to automate the transformation of SysML CD models into Modelica models; and the integration of CD models with other SysML models (e.g. structural, requirements) through the depiction of simulation experiments and engineering analyses. Throughout the thesis, example models of a car suspension and a hydraulically powered excavator are used for demonstration. The core result of this work is the provision of modeling abilities that do not exist independently in SysML or Modelica. These abilities allow systems engineers to prescribe necessary system analyses and relate them to stakeholder concerns and other system aspects. Moreover, this work provides a basis for model integration which can be generalized and re-specialized for integrating other modeling formalisms into SysML.
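The flavour of a SysML-to-Modelica mapping can be hinted at with a toy generator. This sketch uses hypothetical names and a flat parameter dictionary, nothing like the thesis's metamodel-level graph transformations: a parameterised block with one equation is rendered as Modelica model text.

```python
# Toy sketch of a block-to-Modelica translation: a named block with
# typed parameters and one governing equation becomes Modelica source.
# The thesis performs this with graph transformations between the
# SysML and Modelica metamodels; this only illustrates the mapping's
# flavour, using the car-suspension example mentioned in the abstract.

def to_modelica(block_name, parameters, equation):
    """Render a parameterised block as Modelica source text."""
    lines = [f"model {block_name}"]
    for name, (value, unit) in parameters.items():
        lines.append(f'  parameter Real {name} = {value} "({unit})";')
    lines.append("  Real x;")
    lines.append("equation")
    lines.append(f"  {equation};")
    lines.append(f"end {block_name};")
    return "\n".join(lines)

# A damped mass-spring model of a car suspension (illustrative values).
suspension = to_modelica(
    "CarSuspension",
    {"m": (300.0, "kg"), "k": (2.0e4, "N/m"), "c": (1500.0, "N.s/m")},
    "m*der(der(x)) + c*der(x) + k*x = 0",
)
```

Printing `suspension` yields a self-contained Modelica `model ... end ...;` block that a Modelica tool could simulate.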
494

VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Johnston, Christopher Troy January 2009 (has links)
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed in which a selected software algorithm is matched to a hardware architecture tailor-made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified; the resultant designs are therefore heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining can be well represented by data-flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view captures lower-level details by representing each block as a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks; state objects were extended to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided design (especially the type interface, pipeline and control notations).
The user evaluations provided several suggestions for the improvement of the language, and in particular the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
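The streamed, pipelined processing style that VERTIPH targets has a loose software analogy in generator pipelines. The sketch below is that analogy only, not the VERTIPH notation: each stage consumes and produces one pixel per step, and the window stage exhibits the priming behaviour the abstract mentions, buffering inputs before its first output.

```python
# Software analogy of a streamed image-processing pipeline: each stage
# is a generator that consumes one pixel per step and yields a result,
# much as pipelined FPGA blocks pass data forward every clock cycle.
# The 3-tap window stage must "prime" by buffering two pixels before it
# can produce its first output, so the output stream is 2 pixels short.

def window3(stream):
    """3-tap moving window over a 1-D pixel stream."""
    buf = []
    for p in stream:
        buf.append(p)
        if len(buf) == 3:
            yield tuple(buf)
            buf.pop(0)

def mean3(stream):
    """Average each 3-pixel window (integer arithmetic, as in hardware)."""
    for a, b, c in stream:
        yield (a + b + c) // 3

def threshold(stream, level):
    """Binarise the smoothed stream."""
    for p in stream:
        yield 255 if p >= level else 0

pixels = [10, 200, 30, 220, 40, 250]
smoothed = list(threshold(mean3(window3(iter(pixels))), 100))
# smoothed has len(pixels) - 2 entries because of the window's priming.
```

The composition `threshold(mean3(window3(...)))` mirrors how pipelined blocks are chained in a streamed FPGA design, with the two-pixel priming latency visible in the shorter output.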
497

Action refinement in process algebras

Aceto, Luca. January 1992 (has links)
Thesis (Ph.D.), University of Sussex, 1990. Includes bibliographical references (pp. 265-271) and index.
498

Semantic Interoperability of Geospatial Ontologies: A Model-theoretic Analysis

Farrugia, James A. January 2007 (has links) (PDF)
No description available.
499

Agraphs: definição, implementação e suas ferramentas / Agraphs: definition, implementation, and their tools

Sena, Demóstenes Santos de 19 May 2006 (has links)
Programs manipulate information. Information, however, is abstract in nature and needs to be represented, usually by data structures, to make manipulation possible. This work presents AGraphs, a data representation and exchange format based on typed directed graphs with a simulation of hyperedges and hierarchical graphs. Associated with the AGraphs format is a manipulation library with a simple programming interface, tailored to the language being represented. The AGraphs format was used in an ad hoc manner as a representation format in tools developed at UFRN; to make it usable in other tools, a precise description and the development of supporting tools were necessary. This precise description and these tools have been developed and are presented in this work. Finally, the AGraphs format is compared with other representation and exchange formats (e.g. ATerms, GDL, GraphML, GraX, GXL and XML). The main objective of this comparison is to identify important characteristics and the respects in which the AGraphs concepts can still evolve.
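The core idea of simulating hyperedges in an ordinary typed directed graph can be sketched briefly. This is a hypothetical mini-implementation, not the actual AGraphs library: a hyperedge connecting n nodes is simulated by an auxiliary node of a dedicated type with ordinary directed edges to each endpoint.

```python
# Sketch of a typed directed graph in which a hyperedge (connecting any
# number of nodes) is simulated by an auxiliary node of type "hyperedge"
# with ordinary directed edges to each endpoint. Illustrative only; the
# real AGraphs format and library are considerably richer.
import itertools

class TypedGraph:
    def __init__(self):
        self._ids = itertools.count()
        self.node_type = {}          # node id -> type name
        self.edges = []              # (source id, target id)

    def add_node(self, ntype):
        nid = next(self._ids)
        self.node_type[nid] = ntype
        return nid

    def add_edge(self, src, dst):
        self.edges.append((src, dst))

    def add_hyperedge(self, endpoints):
        """Simulate a hyperedge: one auxiliary node, one edge per endpoint."""
        h = self.add_node("hyperedge")
        for n in endpoints:
            self.add_edge(h, n)
        return h

g = TypedGraph()
a, b, c = (g.add_node("stmt") for _ in range(3))
h = g.add_hyperedge([a, b, c])   # one "edge" touching three nodes
```

The same trick extends to hierarchical graphs by giving container nodes a dedicated type and containment edges, which is the general shape of what such formats encode.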
500

Uma Linguagem de Programação Paralela Orientada a Objetos para Arquiteturas Distribuídas / An Object-Oriented Parallel Programming Language for Distributed Architectures

Pinho, Eduardo Gurgel January 2012 (has links)
PINHO, Eduardo Gurgel. Uma Linguagem de Programação Paralela Orientada a Objetos para Arquiteturas Distribuídas. 2012. 71 f. Dissertação (mestrado), Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2012. / In object-oriented programming (OOP) languages, the ability to encapsulate software concerns of the dominant decomposition in objects is the key to reaching high modularity and low complexity in large-scale designs. However, distributed-memory parallelism tends to break the modularity, encapsulation, and functional independence of objects, since parallel computations cannot be encapsulated in individual objects, which reside in a single address space. To reconcile object orientation and distributed-memory parallelism, this work introduces OOPP (Object-Oriented Parallel Programming), a style of OOP in which objects are distributed by default. As an extension of C++, a widespread language in HPC, the PObC++ language has been designed and prototyped, incorporating the ideas of OOPP.
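The distributed-by-default object idea can be loosely illustrated in another language. The following is a hypothetical Python analogy, not PObC++ itself: one logical object is composed of per-process "units", and invoking a method runs it in every unit over that unit's own slice of the data.

```python
# Loose analogy of OOPP's distributed objects: a logical object is a
# set of per-process "units"; a method call runs in every unit over its
# local data slice. Real PObC++ units live in separate address spaces
# (e.g. MPI processes); here they are plain objects in one process.

class VectorSumUnit:
    """One unit of the logical object, owning a local slice of the data."""
    def __init__(self, rank, local_data):
        self.rank = rank
        self.local = local_data

    def local_sum(self):
        return sum(self.local)

class DistributedVectorSum:
    """One logical object made of n units: scatter, compute, combine."""
    def __init__(self, data, n_units):
        chunks = [data[i::n_units] for i in range(n_units)]  # round-robin scatter
        self.units = [VectorSumUnit(r, chunks[r]) for r in range(n_units)]

    def total(self):
        # Stands in for an all-reduce across the units.
        return sum(u.local_sum() for u in self.units)

dv = DistributedVectorSum(list(range(10)), n_units=3)
```

The caller sees a single object (`dv.total()`), while the computation is partitioned across units, which is the encapsulation-preserving shape OOPP aims for.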
