61 |
Leis de Conservação Hiperbólicas 2D com Termo Fonte Stiff [2D Hyperbolic Conservation Laws with Stiff Source Term] / Costa, Camila Gonçalves. January 2013 (has links)
Advisor: Messias Meneguette Júnior / Committee: Gilcilene Sanchez de Paulo / Committee: Erwin Doescher / Abstract: Hyperbolic partial differential equations have received special attention in recent years due to their wide application across many areas of science and the numerical difficulties they impose. This project takes into account the importance of the source term and the implications it has for the numerical solution of hyperbolic equations, especially conservation laws. The work is focused on the two-dimensional case of hyperbolic conservation laws with stiff source terms. This type of source term imposes a disparity between the scale of wave propagation and the scales arising from the source term itself, so a hyperbolic equation with a stiff source term must be treated in a special way. Using the methods most recommended in practice, we solve such an equation by splitting it into two or more parts and then coupling the parts in the final solution. The methods used for each separate part strongly influence the solution... (Complete abstract: click electronic access below) / Master's
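The splitting strategy described in the abstract, advancing the homogeneous conservation law and the stiff source ODE in alternating sub-steps and then coupling the parts, can be sketched in a few lines. The Python fragment below is an illustration only, assuming a scalar advection flux and a relaxation-type source with time scale tau; it is not the thesis's actual test problem or scheme.

```python
import numpy as np

def advect_2d(u, dt, dx, dy, a=1.0, b=1.0):
    """One explicit upwind step for the homogeneous part u_t + a u_x + b u_y = 0
    (a, b > 0, periodic boundaries; dt must satisfy the CFL condition)."""
    u = u - a * dt / dx * (u - np.roll(u, 1, axis=0))
    u = u - b * dt / dy * (u - np.roll(u, 1, axis=1))
    return u

def relax_source(u, dt, tau=1e-6, u_eq=0.0):
    """Exact solution of the stiff relaxation ODE u' = -(u - u_eq)/tau over dt.
    Treating this sub-step exactly (or implicitly) is the point: an explicit
    method would be forced down to time steps of order tau by the stiffness."""
    return u_eq + (u - u_eq) * np.exp(-dt / tau)

def strang_step(u, dt, dx, dy):
    """Strang splitting: half source step, full convection step, half source step."""
    u = relax_source(u, dt / 2)
    u = advect_2d(u, dt, dx, dy)
    return relax_source(u, dt / 2)

u = np.random.rand(64, 64)                      # initial data on a periodic grid
u = strang_step(u, dt=0.005, dx=1 / 64, dy=1 / 64)
```

Splitting decouples the wave-propagation scale from the source scale; the choice of sub-solver for each part, as the abstract notes, strongly influences the coupled solution.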
|
62 |
An investigation of the use of computers in the teaching and learning of hyperbolic graphs in grades 10 to 12 mathematics / Mavhungu, Lavhelani Emily 11 1900 (has links)
In this investigation an attempt was made to determine how learners and teachers use computers in the teaching and learning of hyperbolic graphs in Mathematics. A comprehensive literature study showed that there are many benefits in using computers to study Mathematics. The investigation was done in two phases. In the first phase, a questionnaire was given to learners. The second phase involved interviewing learners and teachers. Findings indicate that learners and teachers enjoy using computers in the teaching and learning of Mathematics. Analysis of the results shows that the use of computers in the teaching and learning of Mathematics, in particular the teaching and learning of hyperbolic graphs, is beneficial. / Mathematical Sciences / M.Sc. (Mathematics Education)
|
63 |
Abstract satisfaction / Haller, Leopold Carl Robert. January 2013 (has links)
This dissertation shows that satisfiability procedures are abstract interpreters. This insight provides a unified view of program analysis and satisfiability solving and enables technology transfer between the two fields. The framework underlying these developments provides systematic recipes that show how intuition from satisfiability solvers can be lifted to program analyzers, how approximation techniques from program analyzers can be integrated into satisfiability procedures, and how program analyzers and satisfiability solvers can be combined. Based on this work, we have developed new tools for checking program correctness and for solving satisfiability of quantifier-free first-order formulas. These tools outperform existing approaches. We introduce abstract satisfaction, an algebraic framework for applying abstract interpretation to obtain sound, but potentially incomplete, satisfiability procedures. The framework allows the operation of satisfiability procedures to be understood in terms of fixed point computations involving deduction and abduction transformers on lattices. It also enables satisfiability solving and program correctness to be viewed as the same algebraic problem. Using abstract satisfaction, we show that a number of satisfiability procedures can be understood as abstract interpreters, including Boolean constraint propagation, the DPLL and CDCL algorithms, Stålmarck's procedure, the DPLL(T) framework and solvers based on congruence closure and the Bellman-Ford algorithm. Our work leads to a novel understanding of satisfiability architectures as refinement procedures for abstract analyses and allows us to relate these procedures to independent developments in program analysis. We use this perspective to develop Abstract Conflict-Driven Clause Learning (ACDCL), a rigorous, lattice-based generalization of CDCL, the central algorithm of modern satisfiability research. The ACDCL framework provides a solution to the open problem of lifting CDCL to new problem domains and can be instantiated over many lattices that occur in practice. We provide soundness and completeness arguments for ACDCL that apply to all such instantiations. We evaluate the effectiveness of ACDCL by investigating two practical instantiations: FP-ACDCL, a satisfiability procedure for the first-order theory of floating-point arithmetic, and CDFPL, an interval-based program analyzer that uses CDCL-style learning to improve the precision of a program analysis. FP-ACDCL is faster than competing approaches in 80% of our benchmarks and faster by more than an order of magnitude in 60% of the benchmarks. Out of 33 safe programs, CDFPL proves 16 more programs correct than a mature interval analysis tool and can conclusively determine the presence of errors in 24 unsafe benchmarks. Compared to bounded model checking, CDFPL is on average at least 260 times faster on our benchmark set.
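The first example in this list is concrete enough to sketch: Boolean constraint propagation is a monotone deduction transformer on the lattice of partial assignments, and unit propagation computes its least fixed point. A minimal sketch follows; the clause encoding as lists of signed literals is my own illustration, not the thesis's formalism.

```python
# Partial assignment: var -> True/False; an absent variable is "unknown".
# Unit propagation only ever refines the assignment (it is monotone on the
# lattice of partial assignments), so iterating it reaches a least fixed point.
def deduce(clauses, assignment):
    """One run of the deduction transformer: propagate unit clauses to a fixed
    point. Returns the refined assignment, or None on conflict (bottom)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for var, sign in clause:            # a literal is (variable, polarity)
                if var in assignment:
                    satisfied |= assignment[var] == sign
                else:
                    unassigned.append((var, sign))
            if satisfied:
                continue
            if not unassigned:
                return None                     # clause falsified: conflict
            if len(unassigned) == 1:            # unit clause forces its literal
                var, sign = unassigned[0]
                assignment[var] = sign
                changed = True
    return assignment

# (x or y) and (not x): propagation derives x = False, then y = True.
print(deduce([[("x", True), ("y", True)], [("x", False)]], {}))
```

CDCL then layers abduction (conflict analysis and clause learning) on top of this deduction loop; ACDCL generalizes exactly that layering from the Boolean lattice to richer lattices such as intervals over floating-point values.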
|
64 |
On collapsible pushdown automata, their graphs and the power of links / Broadbent, Christopher H. January 2011 (has links)
Higher-Order Pushdown Automata (HOPDA) are abstract machines equipped with a nested stack of stacks of ... of stacks. Collapsible pushdown automata (CPDA) enhance these stacks with the addition of ‘links’ emanating from atomic elements to the higher-order stacks below. For trees, CPDA are equi-expressive with recursion schemes, which can be viewed as simply-typed λY terms. With vanilla HOPDA, one can only capture schemes satisfying a syntactic constraint called safety. This dissertation begins with some results concerning the significance of links in terms of recursion schemes. We introduce a fine-grained notion of safety that allows us to correlate the need for links of a given order with the imposition of safety on variables of a corresponding order. This generalises some joint work with William Blum showing that we can dispense with homogeneous types when characterising safety. We complement this result with a demonstration that homogeneity by itself does not constrain the expressivity of otherwise unrestricted recursion schemes. The main results of the dissertation, however, concern the configuration graphs of CPDA. Whilst the configuration graphs of HOPDA are well understood and have decidable MSO theories (they coincide with the Caucal hierarchy), relatively little is known about the transition graphs of CPDA. It is known that they already have undecidable MSO theories at order 2, but Kartzow recently showed that 2-CPDA graphs are tree-automatic and hence first-order logic is decidable at order 2. We provide a characterisation of the decidability of first-order logic on CPDA graphs in terms of quantifier alternation and the order of CPDA stacks and the links contained within. Whilst this characterisation is fairly comprehensive, we do leave open the question of decidability for some sub-classes of CPDA. It turns out that decidability can be highly sensitive to the order of links in a stack relative to the order of the stack itself. In addition to some strong and surprising undecidability results, we also develop Kartzow's work on 2-CPDA further. We introduce prefix-rewrite systems for nested words that characterise the configuration graphs of both 2-CPDA and 2-HOPDA, capturing the power of collapse precisely in terms outside of the language of CPDA. This also formalises and demonstrates the inherent asymmetry of the collapse operation. It generalises the rational prefix-rewriting systems characterising conventional pushdown graphs and, we believe, establishes the 2-CPDA graphs as an interesting and robust class.
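To make the 'links' concrete, the sketch below is a toy order-2 collapsible stack in Python, with a 2-link recorded as the length of the outer-stack prefix the symbol points to. The encoding and operation names are my own simplification, not the thesis's definitions; its point is only that links survive the copying done by the higher-order push, which is what gives collapse its power.

```python
class Order2Stack:
    """Toy order-2 collapsible stack: a stack of 1-stacks whose symbols carry
    2-links, stored as 'length of the outer-stack prefix I point to'."""
    def __init__(self):
        self.stacks = [[]]                 # one empty 1-stack to start

    def push1(self, symbol):
        # The new symbol's 2-link targets the prefix below the current top 1-stack.
        self.stacks[-1].append((symbol, len(self.stacks) - 1))

    def pop1(self):
        return self.stacks[-1].pop()

    def push2(self):
        # Copy the top 1-stack; the copied symbols keep their original links.
        self.stacks.append(list(self.stacks[-1]))

    def pop2(self):
        self.stacks.pop()

    def collapse(self):
        # Replace the whole 2-stack by the prefix the topmost symbol links to.
        _, prefix_len = self.stacks[-1][-1]
        assert prefix_len >= 1, "collapse to the empty 2-stack is undefined"
        self.stacks = self.stacks[:prefix_len]

s = Order2Stack()
s.push1("a")     # link 0: nothing below yet
s.push2()        # [[a], [a]]
s.push1("b")     # link 1: "b" points at the prefix [[a]]
s.push2()        # [[a], [ab], [ab]]; both copies of "b" still link to [[a]]
s.collapse()     # topmost symbol is "b" with link 1
print(s.stacks)  # [[('a', 0)]]
```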
|
65 |
Scalable reasoning for description logics / Shearer, Robert D. C. January 2011 (has links)
Description logics (DLs) are knowledge representation formalisms with well-understood model-theoretic semantics and computational properties. The DL SROIQ provides the logical underpinning for the semantic web language OWL 2, which is quickly becoming the standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. We present a novel reasoning calculus for the description logic SROIQ which addresses two of the major sources of inefficiency present in the tableau-based reasoning calculi used in state-of-the-art reasoners: unnecessary nondeterminism and unnecessarily large model sizes. Further, we describe a new approach to classification which exploits partial information about the subsumption relation between concept names to reduce both the number of individual subsumption tests performed and the cost of working with large ontologies; our algorithm is applicable to the general problem of deducing a quasi-ordering from a sequence of binary comparisons. We also present techniques for extracting partial information about the subsumption relation from the models generated by constructive DL reasoning methods, such as our hypertableau calculus. Empirical results from a prototypical implementation demonstrate substantial performance improvements compared to existing algorithms and implementations.
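The classification strategy, deducing a quasi-order while skipping comparisons whose outcomes are already entailed, can be illustrated independently of any description logic machinery. In the sketch below, test stands in for an expensive reasoner call; the pruning rules (transitive closure for positive results, and the contrapositive "if c <= a and b <= d are known but c <= d is false, then a <= b is false" for negative ones) carry the idea, while the encoding is my own.

```python
from itertools import product

def classify(names, test):
    """All pairs a <= b of a quasi-order over `names`, calling the expensive
    oracle test(a, b) only when the answer is not already entailed."""
    pos = {(a, a) for a in names}          # known subsumptions (reflexivity)
    neg = set()                            # known non-subsumptions

    def close():                           # naive transitive closure of pos
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(list(pos), list(pos)):
                if b == c and (a, d) not in pos:
                    pos.add((a, d))
                    changed = True

    for a, b in product(names, names):
        if (a, b) in pos or (a, b) in neg:
            continue
        # If c <= a and b <= d are known but c <= d is known false,
        # then a <= b would contradict transitivity: record it as false.
        if any((c, a) in pos and (b, d) in pos for (c, d) in neg):
            neg.add((a, b))
            continue
        (pos if test(a, b) else neg).add((a, b))
        close()
    return pos

told = {("Cat", "Mammal"), ("Mammal", "Animal"), ("Cat", "Animal")}
print(sorted(classify(["Cat", "Mammal", "Animal"], lambda a, b: (a, b) in told)))
```

In the toy run, the query Animal <= Cat is answered by negative pruning without calling the oracle; over a large ontology, such saved subsumption tests dominate the cost of classification.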
|
66 |
The safe lambda calculus / Blum, William January 2009 (has links)
We consider a syntactic restriction for higher-order grammars called safety that constrains occurrences of variables in the production rules according to their type-theoretic order. We transpose and generalize this restriction to the setting of the simply-typed lambda calculus, giving rise to what we call the safe lambda calculus. We analyze its expressivity and obtain a result in the same vein as Schwichtenberg's 1976 characterization of the simply-typed lambda calculus: the numeric functions representable in the safe lambda calculus are exactly the multivariate polynomials; thus the conditional is not definable. We also give a similar characterization for representable word functions. We then examine the complexity of deciding beta-eta equality of two safe simply-typed terms and show that this problem is PSPACE-hard. The safety restriction is then extended to other applied lambda calculi featuring recursion and references, such as PCF and Idealized Algol (IA for short). The next contribution concerns game semantics. We introduce a new concrete presentation of this semantics using the theory of traversals. It is shown that the revealed game denotation of a term can be computed by traversing some souped-up version of the term's abstract syntax tree using adequately defined traversal rules. Based on this presentation, and via syntactic reasoning, we obtain a game-semantic interpretation of safety: the strategy denotations of safe lambda-terms satisfy a property called P-incremental justification, which says that the player's moves are always justified by the last pending opponent's move of greater order occurring in the player's view. Next we look at models of the safe lambda calculus. We show that these are precisely captured by Incremental Closed Categories. A game model is constructed and shown to be fully abstract for safe IA. Further, it is effectively presentable: two terms are equivalent just if they have the same set of complete O-incrementally justified plays, where O-incremental justification is defined as the dual of P-incremental justification. Finally we study safety from the point of view of algorithmic game semantics. We observe that in the third-order fragment of IA, the addition of unsafe contexts is conservative for observational equivalence. This implies that all the upper complexity bounds known for the lower-order fragments of IA also hold for the safe fragment; we show that the lower bounds remain the same as well. At order 4, observational equivalence is known to be undecidable for IA. We conjecture that for the order-4 safe fragment of IA, the problem is reducible to the DPDA-equivalence problem and is thus decidable.
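The type-theoretic order underlying the safety restriction is easy to state concretely. Writing simple types as nested pairs, so that (A, B) encodes A -> B, the function below computes the standard order; the comment states (roughly) the constraint that safety imposes in terms of it. This encoding is illustrative, not the thesis's notation.

```python
# Order of a simple type: ord(o) = 0, ord(A -> B) = max(ord(A) + 1, ord(B)).
# The safety restriction then (roughly) demands that a subterm of order k
# may only contain free variables of order at least k.
def order(t):
    if t == "o":                # base type
        return 0
    a, b = t                    # t = (A, B) encodes A -> B
    return max(order(a) + 1, order(b))

print(order("o"))                  # 0
print(order(("o", "o")))           # 1: o -> o
print(order((("o", "o"), "o")))    # 2: (o -> o) -> o
print(order(("o", ("o", "o"))))    # 1: o -> o -> o (right-nested arrows)
```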
|
67 |
A model-independent theory of computational complexity: from patience to precision and beyond / Blakey, Edward William January 2010 (has links)
The field of computational complexity theory, which chiefly aims to quantify the difficulty encountered when performing calculations, is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, 'time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc. In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween. Our notions of resource, overall complexity, normality, etc. form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
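The factorization example can be made quantitative in miniature. Suppose the per-resource costs of such an analogue factorizer scale as in the sketch below; these functions are illustrative assumptions only, and the thesis's definitions of overall complexity and normality are more refined than the crude maximum used here. The point survives the simplification: a 'time-and-space' analysis sees polynomials, while any overall measure that counts precision as a resource sees exponential growth.

```python
# Illustrative resource-cost functions for a hypothetical analogue factorizer.
costs = {
    "time":      lambda n: n ** 2,    # polynomial in the input size n
    "space":     lambda n: n,         # polynomial
    "precision": lambda n: 2.0 ** n,  # exponential: the overlooked resource
}

def overall(n):
    """A crude stand-in for overall complexity: the worst cost over all
    resources deemed relevant (the thesis's definition is more refined)."""
    return max(f(n) for f in costs.values())

for n in (4, 8, 16):
    # time-and-space view vs. overall view: 16 vs 16, 64 vs 256, 256 vs 65536
    print(n, max(costs["time"](n), costs["space"](n)), overall(n))
```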
|
68 |
Static analyses over weak memory / Nimal, Vincent P. J. January 2014 (has links)
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet interleavings are not sufficient to correctly model the executions of modern, multicore processors. These executions follow rules that are weaker than those observed by the interleavings, often leading to reorderings in the sequence of updates to and reads from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and the possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows the re-use of automated verification tools such as model checkers or abstract interpreters that were not designed for weak memory consistency, by modifying the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. We further provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. Finally, we test these approaches on artificial and real software. We justify our choice of an axiomatic model with the scalability of the approach and the runtime performance of the programs modified by our method.
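The canonical example of such a reordering is the store-buffering litmus test. The sketch below writes it in Python purely to fix the shape of the program: CPython will not exhibit the weak behaviour, but the analogous C program on an x86 (TSO) machine can terminate with r1 == r2 == 0, an outcome no interleaving allows.

```python
import threading

x = y = 0
r1 = r2 = None

def thread_0():
    global x, r1
    x = 1        # store to x
    r1 = y       # load of y: under TSO the store above may still sit in a buffer

def thread_1():
    global y, r2
    y = 1        # store to y
    r2 = x       # load of x

# Under interleaving semantics at least one thread must see the other's store,
# so r1 == r2 == 0 is unreachable. Under a weak memory model both loads can
# overtake the buffered stores, making it reachable; inserting a fence between
# the store and the load in each thread (the kind of synchronisation this
# thesis synthesises) restores the interleaving behaviour.
t0 = threading.Thread(target=thread_0)
t1 = threading.Thread(target=thread_1)
t0.start(); t1.start(); t0.join(); t1.join()
print(r1, r2)
```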
|
69 |
Logical aspects of quantum computation / Marsden, Daniel January 2015 (has links)
A fundamental component of theoretical computer science is the application of logic. Logic provides the formalisms by which we can model and reason about computational questions, and novel computational features provide new directions for the development of logic. From this perspective, the unusual features of quantum computation present both challenges and opportunities for computer science. Our existing logical techniques must be extended and adapted to appropriately model quantum phenomena, stimulating many new theoretical developments. At the same time, tools developed with quantum applications in mind often prove effective in other areas of logic and computer science. In this thesis we explore logical aspects of this fruitful source of ideas, with category theory as our unifying framework. Inspired by the success of diagrammatic techniques in quantum foundations, we begin by demonstrating the effectiveness of string diagrams for practical calculations in category theory. We proceed by example, developing graphical formulations of the definitions and proofs of many topics in elementary category theory, such as adjunctions, monads, distributive laws, representable functors, and limits and colimits. We contend that these tools are particularly suitable for calculations in the field of coalgebra, and continue to demonstrate the use of string diagrams in the remainder of the thesis. Our coalgebraic studies commence in chapter 3, in which we present an elementary formulation of a representation result for the unitary transformations, following work developed in a fibrational setting in [Abramsky, 2010]. That paper raises the question of what a suitable "fibred coalgebraic logic" would be. This question is the starting point for our work in chapter 5, in which we introduce a parameterized, duality-based framework for coalgebraic logic. We show sufficient conditions under which dual adjunctions and equivalences can be lifted to fibrations of (co)algebras. We also prove that the semantics of these logics satisfy certain "institution conditions" providing harmony between syntactic and semantic transformations. We conclude by studying the impact of parameterization on another logical aspect of coalgebras, in which certain fibrations of predicates can be seen as generalized invariants. Our focus is on the lifting of coalgebra structure along a fibration from the base category to an associated total category of predicates. We show that, given a suitable parameterized generalization of the usual liftings of signature functors, this induces a "fibration of fibrations" capturing the relationship between the two different axes of variation.
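For readers new to coalgebra, the objects in play can be made concrete in miniature. For the stream functor F(X) = A x X, a coalgebra is a map step : X -> A x X, and a predicate on states is an invariant exactly when it is preserved by step; this is the simplest shadow of lifting coalgebra structure along a fibration of predicates. The Python rendering below is mine and purely illustrative.

```python
# A coalgebra for the stream functor F(X) = A x X on states X = int:
# from state n, emit n and move to n + 2.
def step(n):
    return (n, n + 2)

def is_invariant(pred, states):
    """pred is an invariant if every pred-state steps to a pred-state,
    a toy instance of lifting coalgebra structure to predicates."""
    return all(pred(step(x)[1]) for x in states if pred(x))

evens = lambda n: n % 2 == 0
print(is_invariant(evens, range(100)))              # True: evenness is preserved
print(is_invariant(lambda n: n < 50, range(100)))   # False: boundedness is not
```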
|
70 |
Transformations of representation in constraint satisfaction / Salamon, András Z. January 2013 (has links)
In this thesis I study constraint satisfaction problems or CSPs. These require determining whether values can be assigned to variables so that all constraints are satisfied. An important challenge is to identify tractable CSPs which can be solved efficiently. CSP instances have usually been grouped together by restricting either the allowed combinations of values, or the way the variables are allowed to interact. Such restrictions sometimes yield tractable CSPs. A weakness of this method is that it cannot explain why all-different constraints form a tractable CSP. In this common type of constraint, all variables must be assigned values that are different from each other. New techniques are therefore needed to explain why such CSPs can be solved efficiently. My main contribution is an investigation of such hybrid CSPs which cannot be defined with either one of these kinds of restrictions. The main technique I use is a transformation of a CSP instance to the microstructure representation. This represents an instance as a collection of sets, and a solution of the instance corresponds to an independent set in the clause structure. For the common case where all constraints involve only two variables, I show how the microstructure can be used to define CSPs that are tractable because their clause structures fall within classes of graphs for which an independent set of specified size can be found efficiently. Such tractable hereditary classes are defined by using the technique of excluded induced subgraphs, such as classes of graphs that contain neither odd cycles with five or more vertices, nor their complements. I also develop finer grained techniques, by allowing vertices of the microstructure representation to be assigned colours, and the variables to be ordered. I show that these techniques define a new tractable CSP that forbids an ordered vertex-coloured subgraph in the microstructure representation.
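The transformation at the heart of this approach is concrete enough to sketch directly. Below, a binary CSP is turned into its conflict graph (the complement of the microstructure): vertices are variable-value pairs, edges join incompatible pairs, and solutions correspond exactly to independent sets that use each variable once. The encoding is my own illustration, and the instance is the motivating example above, an all-different constraint on two variables.

```python
from itertools import combinations

def conflict_graph(domains, nogoods):
    """Vertices are (variable, value) pairs; edges join pairs that cannot
    occur together (same variable, or a pair forbidden by a constraint)."""
    vertices = [(v, a) for v, dom in domains.items() for a in dom]
    edges = set()
    for (v1, a1), (v2, a2) in combinations(vertices, 2):
        if v1 == v2:
            edges.add(((v1, a1), (v2, a2)))
        elif ((v1, a1), (v2, a2)) in nogoods or ((v2, a2), (v1, a1)) in nogoods:
            edges.add(((v1, a1), (v2, a2)))
    return vertices, edges

# All-different over x, y with domain {0, 1}: forbid equal values.
domains = {"x": [0, 1], "y": [0, 1]}
nogoods = {(("x", 0), ("y", 0)), (("x", 1), ("y", 1))}
vertices, edges = conflict_graph(domains, nogoods)

def independent(s):
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(s, 2))

solutions = [s for s in combinations(vertices, len(domains))
             if independent(s) and len({v for v, _ in s}) == len(domains)]
print(solutions)   # [(('x', 0), ('y', 1)), (('x', 1), ('y', 0))]
```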
|