1 |
Refactoring proofs. Whiteside, Iain Johnston, January 2013.
Refactoring is an important software engineering technique for improving the structure of a program after it has been written. Refactorings improve the maintainability, readability, and design of a program without affecting its external behaviour. By analogy, this thesis introduces proof refactoring: structured, semantics-preserving changes to the proof documents constructed by interactive theorem provers as part of a formal proof development. In order to study proof refactoring formally, the first part of this thesis constructs a proof language framework, Hiscript. The Hiscript framework consists of a procedural tactic language, a declarative proof language, and a modular theory language. Each level of this framework is equipped with a formal semantics based on a hierarchical notion of proof trees. Furthermore, the framework is generic: it does not prescribe an underlying logical kernel. This part contributes an investigation of semantics for formal proof documents, and the semantics is proved to construct valid proofs. Moreover, in analogy with type-checking, static well-formedness checks of proof documents are separated from evaluation of the proof. A subset of the SSReflect language for Coq, called eSSence, is also encoded using hierarchical proofs; both Hiscript and eSSence are shown to have language elements with a natural hierarchical representation. In the second part, proof refactoring is put on a formal footing with a definition in the Hiscript framework. Over thirty refactorings are formally specified and proved to preserve the semantics in a precise way for the Hiscript language, including traditional structural refactorings, such as rename item, and proof-specific refactorings, such as backwards proof to forwards proof and declarative to procedural. Finally, a concrete, generic refactoring framework called Polar is introduced. Polar is based on graph rewriting and has been implemented with over ten refactorings for two proof languages, including Hiscript. The third part concludes with some wishes for the future.
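To make the idea of a structural proof refactoring concrete, here is a minimal, hypothetical Python sketch (the thesis works over the Hiscript language and its hierarchical proof trees, not this toy data type): a "rename item" operation that renames a lemma and updates references to it, guarded by a simple well-formedness check, echoing the separation of static checks from evaluation described above.

```python
# Hypothetical sketch (not from the thesis): a "rename item" refactoring on a
# toy proof-document tree, renaming a lemma and every reference to it.
from dataclasses import dataclass, field

@dataclass
class Lemma:
    name: str
    statement: str
    proof: list  # tactic steps as strings, possibly naming other lemmas

@dataclass
class Theory:
    items: list = field(default_factory=list)

def rename_item(theory: Theory, old: str, new: str) -> Theory:
    """Rename a lemma and update all references; refuse if `new` is taken."""
    if any(it.name == new for it in theory.items):
        raise ValueError(f"name {new!r} already bound")  # well-formedness check
    renamed = []
    for it in theory.items:
        # Naive textual reference update; a real refactoring works on resolved names.
        steps = [s.replace(old, new) for s in it.proof]
        renamed.append(Lemma(new if it.name == old else it.name, it.statement, steps))
    return Theory(renamed)
```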
|
2 |
Understanding game semantics through coherence spaces. Calderon, Ana C. M. A., January 2012.
No description available.
|
3 |
25 Challenges of Semantic Process Modeling. Mendling, Jan; Leopold, Henrik; Pittke, Fabian, January 2014.
Process modeling has become an essential activity in many organizations for documenting, analyzing, and redesigning their business operations and for supporting them with suitable information systems. In order to serve this purpose, it is important for process models to be well grounded in formal and precise semantics. While the behavioural semantics of process models is well understood, there is a considerable research gap concerning the semantic aspects of their text labels and natural language descriptions. The aim of this paper is to make this research gap more transparent. To this end, we clarify the role of textual content in process models and the challenges associated with the interpretation, analysis, and improvement of their natural language parts. More specifically, we discuss particular use cases of semantic process modeling to identify 25 challenges. For each challenge, we identify prior research and discuss directions for addressing it.
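As a rough illustration of the kind of label analysis these challenges revolve around (a hypothetical sketch, not code or guidelines from the paper; the verb list and labels are invented), one can check whether activity labels follow the commonly recommended verb-object style:

```python
# Hypothetical sketch: flag process-model activity labels that do not follow
# the verb-object style ("Create invoice") but, e.g., the action-noun style
# ("Invoice creation"). A real tool would use POS tagging, not a word list.
VERBS = {"create", "check", "approve", "send", "archive", "reject"}

def is_verb_object(label: str) -> bool:
    words = label.lower().split()
    return len(words) >= 2 and words[0] in VERBS

labels = ["Create invoice", "Invoice creation", "Approve order"]
for lab in labels:
    style = "verb-object" if is_verb_object(lab) else "non-standard"
    print(f"{lab!r}: {style}")
```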
|
4 |
On the semantics of embedded questions / La sémantique des questions enchâssées. Cremers, Alexandre, 24 March 2016.
Following Tarski's (1936) proposal, truth-conditional semantics associates truth conditions with declarative sentences. To understand the meaning of the sentence "It is raining" is to be able to say, after looking out of the window, whether it is true or false. However, this only accounts for declarative sentences, not for questions, since no situation will ever make the question "Who called this morning?" true or false. Hamblin (1973) proposed the first theory of questions within truth-conditional semantics, associating them with resolution conditions, that is, sets of answers. To understand the meaning of the question "Who called this morning?" is then to know that "Jean called" is a possible answer, whereas "it was raining" is not. The study of the semantics of questions quickly turned to questions embedded in declarative sentences (indirect questions): it is much easier to judge the truth conditions of a declarative sentence than the resolution conditions of a question, and given assumptions about the semantics of question-embedding verbs ('know', 'forget', ...), the truth conditions of a declarative sentence can be related to the meaning of the question it embeds. This approach, proposed by Karttunen (1977), has given rise to a very rich theoretical literature. / Two important questions arise from the recent literature on embedded questions. First, Heim (1994) proposed that embedded questions are ambiguous between a weakly and a strongly exhaustive reading; Spector (2005) recently proposed an intermediate exhaustive reading as well. Second, adverbs of quantity such as 'mostly' can quantify over answers to an embedded question (Berman, 1991). An analysis of this phenomenon reveals an analogy between embedded questions and plural determiner phrases, and suggests a fine-grained structure for the denotation of questions (Lahiri, 2002). The first part of the dissertation consists of three psycholinguistic studies: on the exhaustive readings of questions under 'know' in English, on the acquisition of these readings under 'savoir' by French 5-to-6-year-olds, and on the properties of emotive-factive predicates such as 'surprise'. The second part presents a theory of embedded questions built on Klinedinst and Rothschild's (2011) proposal to derive exhaustive readings as implicatures, although it differs in the fine-grained structure it adopts for question denotations in order to account for plurality effects as well. The theory solves a problem raised by B. R. George (2013) and makes predictions for a larger range of sentences.
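For orientation, the three exhaustive readings mentioned above can be schematized as follows (a standard textbook-style rendering, not formulas taken from the dissertation) for "x knows who called", where P is the property of having called, K_x stands for x's knowledge and B_x for x's beliefs:

```latex
% Weakly exhaustive (Karttunen-style): x knows every true answer.
\text{WE:}\quad \forall y\,\big(P(y) \rightarrow K_x\,P(y)\big)

% Intermediate exhaustive (Spector): WE, plus x holds no false positive beliefs.
\text{IE:}\quad \forall y\,\big(P(y) \rightarrow K_x\,P(y)\big) \;\wedge\;
               \forall y\,\big(\neg P(y) \rightarrow \neg B_x\,P(y)\big)

% Strongly exhaustive (Groenendijk & Stokhof): x knows of each individual
% whether or not that individual called.
\text{SE:}\quad \forall y\,\big(P(y) \rightarrow K_x\,P(y)\big) \;\wedge\;
               \forall y\,\big(\neg P(y) \rightarrow K_x\,\neg P(y)\big)
```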
|
5 |
Uma abordagem para representação de resultados formais na UML / An approach for representing formal results in the UML. Pereira, Vinícius, 5 June 2017.
UML is a graphical notation used for modeling object-oriented software systems in different domains of computing. Because it is simple to use compared to other modeling techniques, UML is widespread among software developers, both in academia and in industry. Among its advantages are: (i) the visual representation of relationships between classes and entities, since by using diagrams UML makes it easier to understand and visualize the relationships within the modeled system; (ii) readability and usability without having to read the system's code, since a developer can understand which parts of the code are redundant or reusable; and (iii) its use as a planning tool, helping to define what needs to be done before implementation actually begins, as well as being able to produce code and reduce development time. However, UML also has disadvantages, such as: (i) ambiguity between different UML elements due to overlapping diagrams; and (ii) the lack of a clear semantics, which generally causes the semantics of the programming language to be adopted instead. To mitigate these disadvantages, researchers seek to assign a formal semantics to UML. This type of semantics is found in formal models, where the modeled system is free of ambiguity and has a clear and precise semantics. On the other hand, formal models are not simple for developers to create and understand. The degree of knowledge of formalisms required to use such a model is high, which makes its use less widespread than UML's non-formal graphical notation. Despite researchers' efforts, the techniques that formalize UML semantics generally share a problem that receives little attention: although the system is modeled in UML, the final artifact of these techniques is a formal trace. Given the typical background of a software developer, this trace makes it difficult to analyze the problems found by model checkers and to correct them in the UML model. In order to assist the developer in understanding these formal results (the trace mentioned above), this doctoral thesis presents an approach based on Model-driven Architecture (MDA) capable of representing the information of the formal results within a UML model. Through transformations of the UML model, these representations, defined using the approach, help the developer to visualize the execution flow of the model checker within the UML model. We therefore believe that the advantages obtained by giving UML a formal semantics can become more widely adopted by developers, especially in industry.
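A hypothetical sketch of the core idea (not the thesis's MDA transformations; the trace format, state names, and model structure are invented for illustration): mapping the steps of a model-checker counterexample back onto the UML elements they came from, so the trace can be read on the model rather than in the formal notation.

```python
# Hypothetical sketch: annotate UML model elements with the steps of a
# model-checker counterexample so the trace can be inspected on the model.
counterexample = [          # one snapshot per step, as a model checker might emit
    {"state": "Idle"},
    {"state": "Authenticating"},
    {"state": "Error"},     # the violating state
]

uml_states = {"Idle": {}, "Authenticating": {}, "Error": {}}  # toy UML model

def annotate(model: dict, trace: list) -> dict:
    """Attach step numbers to each UML state visited by the trace."""
    for step, snapshot in enumerate(trace):
        name = snapshot["state"]
        model.setdefault(name, {}).setdefault("visited_at", []).append(step)
    return model

for name, info in annotate(uml_states, counterexample).items():
    print(name, info.get("visited_at", []))
```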
|
6 |
Um estudo sobre a Teoria da Predição aplicada à análise semântica de Linguagens Naturais / A study on the Theory of Prediction applied to the semantical analysis of Natural Languages. Chaer, Iúri, 18 February 2010.
In this work, computational learning is studied as a problem of induction. Starting from a proposed architecture for a system for the semantic analysis of Natural Languages, the two modules required for its construction were developed and tested individually: a pre-processor capable of mapping the contents of texts to a representation in which the semantics of each symbol is explicit, and an inductor module capable of generating theories to explain sequences of events. The component responsible for the induction of theories implements a restricted version of the Solomonoff Predictor, capable of producing hypotheses belonging to the class of Regular Languages. The device has high computational complexity, and its processing time is considerable even for simple inputs. Nonetheless, new and interesting results are presented that show its functional performance. The pre-processing module of the proposed system consists of an implementation of Latent Semantic Analysis, a method that uses statistical correlations to obtain a representation capable of approximating semantic relations similar to those made by human beings. It was used to index the more than 470 thousand texts contained in the first disk of the Reuters RCV1 corpus, producing, across dozens of parameter variations, 71.5 GB of data that were used for several statistical analyses. An information retrieval system was also built for qualitative analyses of the method. The test results suggest that the use of this pre-processing module leads to considerable gains in the proposed system. Integrating the two components into a full semantic analyser of Natural Languages is, at this point, unfeasible due to the processing time required by the inductor module, and remains a task for future work. Nevertheless, it is concluded that Solomonoff's Theory of Prediction is adequate for the problem of the semantic analysis of Natural Languages, provided that ways of mitigating its computation time are devised.
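A minimal sketch of the Latent Semantic Analysis step described above (illustrative only; the thesis's implementation, corpus handling, and parameter choices differ, and the example documents are invented): build a term-document matrix and project the documents onto a low-rank latent space via truncated SVD.

```python
# Minimal LSA sketch (illustrative; not the thesis's implementation):
# term-document matrix -> truncated SVD -> low-dimensional document vectors.
import numpy as np

docs = [
    "the predictor induces regular languages",
    "latent semantic analysis indexes texts",
    "the corpus contains many texts",
]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of latent dimensions kept
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(doc_vecs[1], doc_vecs[2]))  # similarity in the latent space
```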
|
7 |
The Meaning of UML Models. O'Keefe, Greg (gregokeefe@netspace.net.au), January 2010.
The Unified Modelling Language (UML) is intended to express complex ideas in an intuitive and easily understood way. It is important because it is widely used in software engineering and other disciplines. Although an official definition document exists, there is much debate over the precise meaning of UML models.

In response, the academic community have put forward many different proposals for formalising UML, but it is not at all obvious how to decide between them. Indeed, given that UML practitioners are inclined to reject formalisms as non-intuitive, it is not even obvious that the definition should be formal at all. Rather than searching for yet another formalisation of UML, our main aim is to determine what would constitute a good definition of UML.

The first chapter sets the UML definition problem in a broad context, relating it to work in logic and the philosophy of science. More specific conclusions about the nature of model driven development are reached in the beginning of Chapter 2. We then develop criteria for a definition of UML. Applying these criteria to the existing definition, we find that it is lacking in clarity. We then set out to test the precision of the definition. The test is to take an apparently inconsistent model, and determine whether it really is inconsistent according to the definition.

Many people have proposed that UML models are graphs, but few have justified this choice using the official definition of UML. We begin Chapter 3 by arguing from the official definition that UML models are graphs and that instantiation is a graph homomorphism into an interpretation functor. The official definition of UML defines the semantics against its abstract syntax, which is in turn defined by a UML model. Chapters 3 and 4 prepare for our test by resolving this apparent circularity. The result is a semantics for the metamodel fragment of the language.

In Chapter 5, we find, contrary to popular belief, that the official definition does provide sufficient semantics to classify the example model as inconsistent. Moreover, the sustained study of the semantics in Chapters 3 to 5 confirms our initial argument that the semantic domain is graphs. The Actions are the building blocks of UML's prescriptive dynamics. We see that they can be naturally defined as graph transformation rules. Sequence diagrams are the main example of descriptive dynamics, but we find that their official semantics are broken. The recorded history approach should be replaced, we suggest, by a graph-oriented dynamic logic.

Chapter 6 presents our early work on dynamic logic for UML sequence diagrams and further explores the proposed semantic repairs. In Chapter 7, guided by the criteria developed in Chapter 2, we critically survey the UML formalisation literature and conclude that an existing body of graph transformation based work known as dynamic metamodelling is very close to what is required.

The final chapter draws together our conclusions. It proposes a category theoretic construction to merge models of the syntax and semantic domain, yielding a type graph for the graph transformation system which defines the dynamic semantics of the language. Finally, it outlines the further work required to realise a satisfactory definition of UML.
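To illustrate the central claim that instantiation is a graph homomorphism (a toy sketch, not the thesis's category-theoretic construction; the class and object names are invented): an object graph is well-typed against a class graph when the typing map sends every object-level edge to a class-level edge with the same label.

```python
# Toy sketch (not the thesis's construction): check that a typing map from an
# instance graph to a type graph is a graph homomorphism, i.e. every edge
# between objects maps to a matching edge between their classes.
type_graph = {("Company", "Person"): "employs"}          # class-level edges
instance_graph = {("acme", "ada"): "employs"}            # object-level edges
typing = {"acme": "Company", "ada": "Person"}            # instantiation map

def is_homomorphism(inst, types, typing_map) -> bool:
    for (src, tgt), label in inst.items():
        src_t, tgt_t = typing_map.get(src), typing_map.get(tgt)
        if types.get((src_t, tgt_t)) != label:
            return False
    return True

print(is_homomorphism(instance_graph, type_graph, typing))  # True
```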
|
8 |
Towards expressive, well-founded and correct Aspect-Oriented Programming. Südholt, Mario, 11 July 2007.
This thesis aims at two different goals. First, a uniform presentation of the major relevant research results on EAOP-based expressive aspects. We motivate that these instantiations enable aspects to be defined more concisely and provide better support for formal reasoning over AO programs than standard atomic approaches and other proposed non-atomic approaches. Concretely, four groups of results are presented in order to substantiate these claims:
1. The EAOP model, which features pointcuts defined over the execution history of an underlying base program. We present a taxonomy of the major language design issues pertaining to non-atomic aspect languages, such as pointcut expressiveness (e.g., finite-state based, Turing-complete) and aspect composition mechanisms (e.g., precedence specifications vs. Turing-complete composition programs).
2. Support for the formal definition of aspect-oriented programming based on different semantic paradigms (among others, operational semantics and denotational semantics). Furthermore, we have investigated the static analysis of interactions among aspects as well as applicability conditions for aspects. The corresponding foundational work on AOP has also permitted us to investigate different weaver definitions that generalize those used in other approaches.
3. Several instantiations of the EAOP model for aspects concerning sequential program executions, in particular for component-based and system-level programming. The former has resulted in formally defined notions of aspects for the modification of component protocols, while the latter has shown, in particular, that expressive aspects can be implemented in a performance-critical domain with negligible to reasonable overhead.
4. Two instantiations of the EAOP model for distributed and concurrent programming that significantly increase the abstraction level of aspect definitions by means of domain-specific abstractions.
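A hypothetical miniature of a history-based pointcut (not the thesis's formal model; the events and policy are invented): a finite-state matcher over the execution history that triggers advice only when a given pattern of base-program events has been observed.

```python
# Hypothetical sketch of a stateful, history-based pointcut: advice fires on
# "withdraw" only if no "login" event is in effect (i.e., no login since the
# last logout), mimicking a finite-state pointcut over the execution history.
def make_pointcut():
    state = {"logged_in": False}            # finite-state pointcut memory
    def matches(event: str) -> bool:
        if event == "login":
            state["logged_in"] = True
        elif event == "logout":
            state["logged_in"] = False
        return event == "withdraw" and not state["logged_in"]
    return matches

pointcut = make_pointcut()
for ev in ["withdraw", "login", "withdraw", "logout", "withdraw"]:
    if pointcut(ev):
        print(f"advice: blocking unauthenticated {ev!r}")
```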
|
9 |
Reasoning About Staged Programs. January 2010.
This thesis establishes formal equational properties of multi-stage calculi and related proof techniques that support analyses of staged programs. A key promise of staging is to make programs efficient without destroying clarity, thereby reducing the likelihood of bugs. However, few publications rigorously verify that their staged programs indeed behave as intended. In fact, little is known about how staged programs can be verified, or what correctness issues staging introduces. To solve this problem, I show a reduction of the correctness of a staged program to that of an unstaged program. This reduction not only clarifies the effects of staging on program behavior but also eases verification, as unstaged programs are more susceptible to existing reasoning techniques. I also demonstrate that important single-stage reasoning techniques apply to staged programs. These techniques are useful for establishing side conditions for the reduction and for discovering or validating further reasoning principles. / NSF grant CCF-0747431
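The reduction from staged to unstaged correctness can be pictured with the classic power example (a hypothetical Python sketch using string-based code generation; the thesis works in a formal multi-stage calculus, not Python): the code generated for a fixed exponent should agree with the ordinary, unstaged function on every input.

```python
# Hypothetical sketch: a two-stage "power" function. Stage one generates
# specialized code for a fixed exponent; its correctness is checked against
# the plain, unstaged function, mirroring the reduction described above.
def power_unstaged(x: int, n: int) -> int:
    return 1 if n == 0 else x * power_unstaged(x, n - 1)

def gen_power(n: int) -> str:
    """Stage one: produce source code computing x**n by repeated multiplication."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

namespace = {}
exec(gen_power(3), namespace)            # stage two: run the generated code
power_3 = namespace["power_3"]

assert all(power_3(x) == power_unstaged(x, 3) for x in range(10))
print(power_3(2))                        # 8
```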
|