1

Game semantics for an object-oriented language

Wolverson, Nicholas. January 2009.
This thesis investigates the relationship between object-oriented programming languages and game models of computation. These are intuitively well matched: an object encapsulates some internal state and presents some behaviour to the world via its publicly visible methods, while a strategy for some game represents the possible interactions of a program with its environment. We work with a simple and well-understood game model. Rather than tailoring our model to match some existing programming language, we view the simplicity of our semantic setting as a virtue, and try to find the appropriate language corresponding to the model. We define a class-based, stateful object-oriented language, and give a heap-based operational semantics and an interpretation in our game model. At the heart of this interpretation lies a novel semantic treatment of the phenomenon of data abstraction. The model closely guides the design of our language, which enjoys an intermediate level of expressivity between that of first-order and general higher-order store. The agreement between the operational and game interpretations is verified by a soundness proof. This involves the development of specialised techniques and a detailed analysis of the relationship between the concrete and abstract views. We also show that definability and full abstraction hold at certain types of arbitrary rank, but are problematic at other types. We conclude by briefly discussing an extended language with a control operator, along with other extensions leading to a possible core for a more realistic programming language.
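The object/strategy correspondence sketched above can be pictured with a toy class. The following Java sketch is purely illustrative and does not appear in the thesis: the private field is the encapsulated state, and in a game reading each method call is an environment (Opponent) move answered by a program (Player) move, so the object's observable behaviour is exactly a strategy over such call/return interactions.

```java
// Illustrative only: an object whose hidden state can be observed
// solely through its public methods. Its game-semantic denotation
// would be a strategy over method-call (Opponent) and method-return
// (Player) moves.
public class Counter {
    private int count = 0;              // encapsulated internal state

    public void inc() { count++; }      // call = O-move, return = P-move

    public int get() { return count; }  // the returned value is part of the P-move
}
```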
2

Recursive probabilistic models: efficient analysis and implementation

Wojtczak, Dominik. January 2009.
This thesis examines Recursive Markov Chains (RMCs), their natural extensions and their connection to other models. RMCs can model in a natural way probabilistic procedural programs and other systems that involve recursion and probability. An RMC is a set of ordinary finite-state Markov Chains that are allowed to call each other recursively, and it describes an ordinary Markov Chain with a potentially infinite, but countable, state space. RMCs generalize, in a precise sense, several well-studied probabilistic models from other domains, such as natural language processing (Stochastic Context-Free Grammars), population dynamics (Multi-Type Branching Processes) and queueing theory (Quasi-Birth-Death processes, QBDs). In addition, RMCs can be extended to a controlled version called Recursive Markov Decision Processes (RMDPs) and to a game version referred to as Recursive (Simple) Stochastic Games (RSSGs). For analyzing RMCs, RMDPs and RSSGs we devised highly optimized numerical algorithms and implemented them in a tool called PReMo (Probabilistic Recursive Models analyzer). PReMo allows computation of the termination probability and expected termination time of RMCs and QBDs, and of a restricted subset of RMDPs and RSSGs. The input models are described by the user in simple, specifically designed input languages. Furthermore, in order to analyze the worst and best expected running time of probabilistic recursive programs, we study RMDPs and RSSGs with positive rewards assigned to each of their transitions, and provide new upper and lower complexity bounds for their analysis. We also establish some new connections between our models and models studied in queueing theory. Specifically, we show that (discrete-time) QBDs can be described as a special subclass of RMCs, and that Tree-like QBDs, a generalization of QBDs, are equivalent to RMCs in a precise sense. We also prove that for a given QBD we can compute (in the unit-cost RAM model) an approximation of its termination probabilities within i bits of precision in time polynomial in the size of the QBD and linear in i. Specifically, we show that we can do this using a decomposed Newton’s method.
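The flavour of the underlying numerical problem can be seen in a one-variable special case. The sketch below is a toy illustration, not PReMo code: assume a procedure that terminates immediately with probability p0 and otherwise calls itself twice; its termination probability is the least solution of x = p0 + p2*x^2 in [0, 1], and Newton's method started from 0 converges monotonically to that least fixed point. The decomposed Newton's method mentioned above applies this idea to systems of such fixed-point equations rather than to a single polynomial.

```java
// Toy illustration (not PReMo code), assuming a one-variable
// RMC-style system: a procedure returns with probability p0 or calls
// itself twice with probability p2. Its termination probability is
// the least fixed point of f(x) = p0 + p2*x^2, computed here by
// Newton's method on g(x) = f(x) - x, starting below the fixed point.
public class RmcTermination {
    public static void main(String[] args) {
        double p0 = 0.4, p2 = 0.6;              // p0 + p2 = 1
        double x = 0.0;                          // start at 0
        for (int i = 0; i < 50; i++) {
            double f = p0 + p2 * x * x;          // f(x)
            double fPrime = 2.0 * p2 * x;        // f'(x)
            x = x - (f - x) / (fPrime - 1.0);    // Newton step for f(x) - x = 0
        }
        // x = 1 is always a root; the least root is min(1, p0/p2),
        // here 2/3, so termination is not almost sure.
        System.out.println("termination probability approx. " + x);
    }
}
```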
3

Program transformations in weak memory models

Sevcik, Jaroslav. January 2009.
We analyse the validity of common optimisations on multi-threaded programs in two memory models—the DRF guarantee and the Java Memory Model. Unlike in the single-threaded world, even simple program transformations, such as common subexpression elimination, can introduce new behaviours in shared-memory multi-threading with an interleaved semantics. To validate such optimisations, most current programming languages define weaker semantics, called memory models, that aim to allow such transformations while providing reasonable guarantees. In this thesis, we consider common program transformations and analyse their safety in the two most widely used language memory models: (i) the DRF guarantee, which promises sequentially consistent behaviours for data race free programs, and (ii) the Java Memory Model, which is the semantics of multithreaded Java. The DRF guarantee is the semantics of Ada and has been proposed as the semantics of the upcoming revision of C++. Our key results are: (i) we prove that a large class of elimination and reordering transformations satisfies the DRF guarantee; (ii) we find that the Java language is more restrictive—despite the claims in the specification, the Java Memory Model does not allow some important program transformations, such as common subexpression elimination. To establish the safety results, we develop a trace semantic framework and describe important program optimisations as relations on sets of traces. We prove that all our elimination and reordering transformations satisfy the DRF guarantee, i.e., the semantic transformations cannot introduce new behaviours for data race free programs. Moreover, we prove that all the transformations preserve data race freedom. This ensures safety of transformations composed from eliminations and reorderings. In addition to the DRF guarantee, we prove that for arbitrary programs our transformations prevent values appearing “out of thin air”—if a program does not contain a constant c and does not perform any operation that could create c, then no transformation of the program can output c. We give an application of the semantic framework to a concrete language and prove safety of several simple syntactic transformations. We employ similar semantic techniques to prove the validity of several classes of transformations, such as the elimination of an overwritten write or the reordering of independent memory accesses, in the Java Memory Model. To establish the negative results for the Java Memory Model, we provide counterexamples showing that several common optimisations can introduce new behaviours and thus are not safe.
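How such an elimination can introduce behaviour is easiest to see on a small two-thread program. The following Java sketch is modelled on the redundant-read-elimination counterexamples discussed in this line of work; the variable names and layout are illustrative, not reproduced from the thesis. Eliminating the second read of x (setting r2 = r1) makes the branch unconditional, the write y = 1 can then be performed early, and the outcome r1 == r2 == 1 becomes observable even though the original program forbids it.

```java
// Illustrative sketch (names and layout are not from the thesis).
// Initially x == y == 0. In the original program the outcome
// r1 == r2 == 1 is impossible: y is written only after both reads of
// x, so observing x == 1 would require a causal cycle. If common
// subexpression elimination rewrites "r2 = x" to "r2 = r1", the
// branch is always taken, the write y = 1 may be hoisted before the
// reads, and r1 == r2 == 1 becomes reachable.
public class CseExample {
    static int x = 0, y = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            int r1 = x;
            int r2 = x;            // CSE candidate: same expression as above
            if (r1 == r2) {
                y = 1;             // unconditional after the elimination
            }
        });
        Thread t2 = new Thread(() -> {
            int r3 = y;            // may observe the hoisted write
            x = r3;                // feeds it back into x
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```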
4

Type-based amortized stack memory prediction

Campbell, Brian. January 2008.
Controlling resource usage is important for the reliability, efficiency and security of software systems. Automated analyses for bounding resource usage can be invaluable tools for ensuring these properties. Hofmann and Jost have developed an automated static analysis for finding linear heap space bounds in terms of the input size for programs in a simple functional programming language. Memory requirements are amortized by representing them as a requirement for an abstract quantity, potential, which is supplied by assigning potential to data structures in proportion to their size. This assignment is represented by annotations on their types. The type system then ensures that all potential requirements can be met from the original input’s potential if a set of linear constraints can be solved. Linear programming can optimise this amount of potential subject to the constraints, yielding an upper bound on the memory requirements. However, obtaining bounds on the heap space requirements does not detect a faulty or malicious program which uses excessive stack space. In this thesis, we investigate extending Hofmann and Jost’s techniques to infer bounds on stack space usage, first by examining two approaches: using the Hofmann–Jost analysis unchanged by applying a CPS transformation to the program being analysed, then showing that this predicts the stack space requirements of the original program; and directly adapting the analysis itself, which we will show is more practical. We then consider how to deal with the different allocation patterns stack space usage presents. In particular, the temporary nature of stack allocation leads us to a system where we calculate the total potential after evaluating an expression in terms of assignments of potential to the variables appearing in the expression as well as to the result. We also show that this analysis subsumes our previous systems, and improves upon them. We further increase the precision of the bounds inferred by noting the importance of expressing stack memory bounds in terms of the depth of data structures and by taking the maximum of the usage bounds of subexpressions. We develop an analysis which uses richer definitions of the potential calculation to allow depth and maxima to be used, albeit with a more subtle inference process.
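The allocation pattern referred to above is what makes maxima and depth the natural notions for stack bounds: frames are reclaimed on return, so two successive calls reuse the same space. The Java fragment below is illustrative only and is not drawn from the thesis.

```java
// Illustrative only: stack usage of sequential composition is a
// maximum, not a sum, and the natural bound for a tree traversal is
// the tree's depth rather than its size.
public class StackUsage {
    static final class Tree {
        final Tree left, right;
        Tree(Tree left, Tree right) { this.left = left; this.right = right; }
    }

    // Uses one frame per level of recursion, so a bound of the form
    // depth(t) + O(1) frames: exactly the shape a depth-based
    // potential annotation can express.
    static int depth(Tree t) {
        if (t == null) return 0;
        int l = depth(t.left);   // frames for this call are popped...
        int r = depth(t.right);  // ...before this call pushes its own: max, not sum
        return 1 + Math.max(l, r);
    }
}
```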
5

From relations to XML : cleaning, integrating and securing data

Jia, Xibei. January 2008.
While relational databases are still the preferred approach for storing data, XML is emerging as the primary standard for representing and exchanging data. Consequently, it has become increasingly important to provide a uniform XML interface to various data sources (integration), and critical to protect sensitive and confidential information in XML data (access control). Moreover, it is preferable to first detect and repair inconsistencies in the data, to avoid the propagation of errors to other data processing steps. In response to these challenges, this thesis presents an integrated framework for cleaning, integrating and securing data. The framework contains three parts. First, the data cleaning sub-framework makes use of a new class of constraints specially designed for improving data quality, referred to as conditional functional dependencies (CFDs), to detect and remove inconsistencies in relational data. Both batch and incremental techniques are developed for efficiently detecting CFD violations with SQL and for repairing them based on a cost model. The cleaned relational data, together with other non-XML data, is then converted to XML format using widely deployed XML publishing facilities. Second, the data integration sub-framework uses a novel formalism, XML integration grammars (XIGs), to integrate multi-source XML data which is either native or published from traditional databases. XIGs automatically support conformance to a target DTD, and allow one to build a large, complex integration via composition of component XIGs. To efficiently materialize the integrated data, algorithms are developed for merging XML queries in XIGs and for scheduling them. Third, to protect sensitive information in the integrated XML data, the data security sub-framework allows users to access the data only through authorized views. User queries posed on these views need to be rewritten into equivalent queries on the underlying document to avoid the prohibitive cost of materializing and maintaining a large number of views. Two algorithms are proposed to support virtual XML views: a rewriting algorithm that characterizes the rewritten queries as a new form of automata, and an evaluation algorithm to execute these automata-represented queries. Together they allow the security sub-framework to answer queries on views in linear time. Using both relational and XML technologies, this framework provides a uniform approach to cleaning, integrating and securing data. The algorithms and techniques in the framework have been implemented, and an experimental study verifies their effectiveness and efficiency.
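For readers unfamiliar with CFDs, a standard example from the data-quality literature illustrates the idea; the schema and constants below are illustrative and are not taken from this thesis. A CFD pairs an embedded functional dependency with a pattern tableau that restricts where the dependency must hold, as sketched next; violations of such constraints can then be located with ordinary SQL queries over the relation, which is what makes the batch and incremental detection techniques practical.

```latex
% Sketch of a CFD on an illustrative schema cust(CC, zip, street):
% the embedded dependency [zip] -> [street] ("zip code determines
% street") is required to hold only on tuples matching the pattern
% CC = 44, i.e. UK customers; "_" is an unconstrained wildcard.
\varphi \;=\; \bigl(\mathit{cust} : [\mathit{CC}, \mathit{zip}] \rightarrow [\mathit{street}],\; T\bigr),
\qquad
T:\;
\begin{array}{cc|c}
\mathit{CC} & \mathit{zip} & \mathit{street} \\ \hline
44 & \_ & \_
\end{array}
```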
6

Inteligência de máquina: esboço de uma abordagem construtivista (Machine intelligence: sketch of a constructivist approach)

Costa, Antonio Carlos da Rocha. January 1993.
This work introduces a definition for the notion of machine intelligence, establishes the concrete possibility of that definition and gives indications of its necessity - that is, it gives the notion an objective content and shows the interest and utility that the definition may have for computer science in general and for artificial intelligence in particular. Specifically, we take a particular reading of the definition of intelligence given by J. Piaget, and we establish the conditions under which that definition can be interpreted in the domain of machines. To achieve this, a revision of the fundamental notions of computer science was necessary, in order to make explicit the dynamical aspects of variability, controllability and adaptability underlying those concepts (machine, program, computation, and machine organization, regulation and adaptation). On the other hand, a change of attitude toward the objectives of artificial intelligence was also required. The given definition presupposes that one recognizes the operational autonomy of machines, and this implies abandoning, or at least de-emphasising, the point of view we call artificialism - the search for the imitation of the intelligent behavior of human beings and animals - and adopting the point of view we call naturalism, which considers machine intelligence a natural phenomenon in machines, worthy of study in its own right. The work presents the results of the reflection through which we tried to achieve those goals.
