161. Student Approaches to Combinatorial Enumeration: The Role of Set-Oriented Thinking
Lockwood, Elise Nicole, 01 January 2011
Combinatorics is a growing topic in mathematics with widespread applications in a variety of fields. Because of this, it has become increasingly prominent in both K-12 and undergraduate curricula. There is a clear need in mathematics education for studies that address cognitive and pedagogical issues surrounding combinatorics, particularly related to students' conceptions of combinatorial ideas. In this study, I describe my investigation of students' thinking as it relates to counting problems. I interviewed a number of post-secondary students as they solved a variety of combinatorial tasks, and through the analysis of this data I defined and elaborated a construct that I call set-oriented thinking. I describe and categorize ways in which students used set-oriented thinking in their counting, and I put forth a model for relationships between the formulas/expressions, the counting processes, and the sets of outcomes that are involved in students' counting activity.
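As a hedged illustration of the three elements this model relates (a sketch of mine, not the author's tasks or data), take the simplest kind of counting question and line up the formula, the counting process, and the set of outcomes:

# Illustrative sketch: one counting question viewed three ways.
from itertools import combinations
from math import comb

# How many 2-element subsets does {a, b, c, d} have?
formula_answer = comb(4, 2)                  # the formula/expression: C(4,2) = 6
outcomes = list(combinations("abcd", 2))     # the set of outcomes, listed explicitly
process_answer = sum(1 for _ in combinations("abcd", 2))  # the counting process

assert formula_answer == process_answer == len(outcomes) == 6
print(outcomes)  # [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]

Set-oriented thinking, in these terms, is attending to the middle element: which concrete outcomes a formula is counting, and why the counting process generates each of them exactly once.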
162. The cost of terminating parallel discrete-event simulations
Sanjeevan, Vasant, 29 September 2009
Simulation models use many different rules to decide when to terminate. Parallel simulations generally use a single, simple rule: each process comprising the simulation terminates after a predefined period of time. A number of parallel simulation protocols have been proposed that enforce constraints on the order in which processes are scheduled in parallel so that the result of a parallel simulation is the same as that of the corresponding sequential simulation. Parallel simulation protocols can be broadly classified into two categories: conservative and optimistic. Conservative protocols can be subclassified into synchronous and asynchronous protocols. In this thesis, our objective is to compare the predicted and measured wall-clock running times of parallel simulations for conservative-synchronous and optimistic protocols with and without termination conditions.
We propose eight algorithms for mechanically adding an arbitrary termination condition to a conservative-synchronous non-terminating parallel simulation. Informal arguments about the expected performance of each algorithm are made, and the arguments are confirmed through measurement of the simulation of a torus network with three termination conditions using the conservative-synchronous Bounded Lag protocol on a shared memory multiprocessor. We also propose four algorithms for mechanically adding a termination condition to an optimistic non-terminating parallel simulation. We make informal arguments about the expected performance of these algorithms and report on the actual performance of the simulation of the torus network benchmark with two of these algorithms and the same three termination conditions using the optimistic Time Warp protocol on a message-passing multiprocessor. In addition to the torus network benchmark for the optimistic protocol, we also report on the performance of a colliding pucks simulation with these two algorithms and three additional termination conditions.
Our study indicates that termination conditions which require exhaustive evaluation introduce substantial running time overhead. We propose and evaluate a scheme to reduce this overhead.
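As a hedged sketch of the general setting (a toy of mine, not one of the thesis's algorithms), a conservative-synchronous simulation advances all processes in lockstep time windows, and a termination condition can be bolted onto the barrier at the end of each window; evaluating the predicate exhaustively at every barrier is exactly the kind of overhead the study measures:

import heapq

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.events = []     # local future-event list of (timestamp, payload)
        self.handled = 0

    def run_until(self, bound):
        # Conservatively safe: only handle events strictly below the window bound.
        while self.events and self.events[0][0] < bound:
            heapq.heappop(self.events)
            self.handled += 1

def simulate(processes, window, terminated):
    now = 0.0
    while True:
        now += window                    # advance the synchronised time window
        for p in processes:              # in a real run these execute in parallel
            p.run_until(now)
        if terminated(processes, now):   # exhaustive check at the barrier
            return now

# An arbitrary termination condition, in the spirit of the thesis: stop once
# every process has handled ten events, or simulated time exceeds 1000.
stop = lambda ps, t: all(p.handled >= 10 for p in ps) or t > 1000.0
procs = [Process(i) for i in range(4)]
for p in procs:
    for k in range(12):
        heapq.heappush(p.events, (k * 7.0, "tick"))
print(simulate(procs, window=10.0, terminated=stop))  # 80.0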
163. The influence of the use of computers in the teaching and learning of functions in school mathematics
Gebrekal, Zeslassie Melake, 30 November 2007
The aim of the study was to investigate what influence the use of computers, specifically MS Excel and RJS Graph software, has on grade 11 Eritrean students' understanding of functions in the learning of mathematics. An empirical investigation using quantitative and qualitative research methods was carried out. A pre-test (task 1), a post-test (task 2), a questionnaire and an interview schedule were used to collect data.
Two randomly selected sample groups (i.e. experimental and control groups) of students were involved in the study. The experimental group learned the concepts of functions, particularly quadratic functions using computers. The control group learned the same concepts through the traditional paper-pencil method.
The results indicated that the use of computers has a positive impact on students' understanding of functions, as reflected in their achievement, problem-solving skills, motivation, attitude and the classroom environment.
164. Automatic verification of competitive stochastic systems
Simaitis, Aistis, January 2014
In this thesis we present a framework for automatic formal analysis of competitive stochastic systems, such as sensor networks, decentralised resource management schemes or distributed user-centric environments. We model such systems as stochastic multi-player games, which are turn-based models where an action in each state is chosen by one of the players or according to a probability distribution. The specifications, such as "sensors 1 and 2 can collaborate to detect the target with probability 1, no matter what other sensors in the network do" or "the controller can ensure that the energy used is less than 75 mJ, and the algorithm terminates with probability at least 0.5", are provided as temporal logic formulae. We introduce a branching-time temporal logic rPATL and its multi-objective extension to specify such probabilistic and reward-based properties of stochastic multi-player games. We also provide algorithms for these logics that can either verify such properties against the model, providing a yes/no answer, or perform strategy synthesis by constructing a strategy for the players that satisfies the specification. We conduct a detailed complexity analysis of the model checking problem for rPATL and its multi-objective extension and provide efficient algorithms for verification and strategy synthesis. We also implement the proposed techniques in the PRISM-games tool and apply them to the analysis of several case studies of competitive stochastic systems.
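As a hedged illustration of the underlying computation (a toy value iteration of mine, not the PRISM-games implementation), the maximum probability with which a coalition can reach a target in a turn-based stochastic game is the fixpoint of alternating maximisation, minimisation and expectation:

# game[s] = (owner, {action: [(prob, successor), ...]}); 'max' states belong to
# the coalition, 'min' states to the opposing players.
game = {
    's0':   ('max', {'a': [(0.5, 's1'), (0.5, 's2')], 'b': [(1.0, 's2')]}),
    's1':   ('min', {'c': [(1.0, 'goal')], 'd': [(1.0, 's2')]}),
    's2':   ('max', {'e': [(0.3, 'goal'), (0.7, 'sink')]}),
    'goal': ('max', {}),   # absorbing target
    'sink': ('max', {}),   # absorbing failure
}

def reach_value(game, target, iters=1000):
    v = {s: (1.0 if s in target else 0.0) for s in game}
    for _ in range(iters):
        for s, (owner, actions) in game.items():
            if s in target or not actions:
                continue
            vals = [sum(p * v[t] for p, t in dist) for dist in actions.values()]
            v[s] = max(vals) if owner == 'max' else min(vals)
    return v

# Roughly the value an rPATL query of the form <<coalition>> Pmax=? [ F goal ]
# would ask for at s0:
print(reach_value(game, target={'goal'})['s0'])   # 0.3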
165. Repairing strings and trees
Riveros Jaeger, Cristian, January 2013
What do you do if a computational object fails a specification? An obvious approach is to repair it, namely, to modify the object minimally to get something that satisfies the constraints. In this thesis we study foundational problems of repairing regular specifications over strings and trees. Given two regular specifications R and T, we aim to understand how difficult it is to transform an object satisfying R into an object satisfying T. The setting is motivated by considering R to be a restriction -- a constraint that the input object is guaranteed to satisfy -- while T is a target -- a constraint that we want to enforce.
We first study which pairs of restriction and target specifications can be repaired with a "small" number of changes. We formalize this as the bounded repair problem -- to determine whether one can repair each object satisfying R into T with a uniform number of edits. We provide effective characterizations of the bounded repair problem for regular specifications over strings and trees. These characterizations are based on a detailed understanding of the cyclic behaviour of finite automata. By exploiting these characterizations, we give optimal algorithms to decide whether two specifications are bounded repairable.
We also consider the impact of limitations on the editing process -- what happens when we require the repair to be done sequentially over serialized objects. We study the bounded repair problem over strings and trees restricted to this streaming setting and show that this variant can be characterized in terms of finite games. Furthermore, we use this characterization to decide whether one can repair a pair of specifications in a streaming fashion with bounded cost, and how to obtain a streaming repair strategy in this case.
Bounded repairability asks for a uniform bound on the number of edits, which is a strong requirement. To overcome this limitation, we study how to calculate the maximum number of edits per character needed to repair any object in R into T. We formalize this as the asymptotic cost -- the limit of the number of edits divided by the length of the input in the worst case. Our contribution is an algorithm to compute the asymptotic cost for any pair of regular specifications over strings. We also consider the streaming variant of this cost and show how to compute it by reducing the problem to mean-payoff games.
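For background (a textbook dynamic program, not a contribution of the thesis), the per-object repair cost in this setting is an edit distance, the minimum number of insertions, deletions and substitutions; bounded repairability asks whether this cost stays uniformly bounded over all objects satisfying R:

def edit_distance(u, v):
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                # delete all of u[:i]
    for j in range(n + 1):
        d[0][j] = j                                # insert all of v[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if u[i - 1] == v[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n]

# Repairing any string of R = (ab)* into T = (ba)* takes at most two edits
# (drop the leading a, append one), so this pair is intuitively bounded repairable.
print(edit_distance("ababab", "bababa"))   # 2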
166. Category-theoretic quantitative compositional distributional models of natural language semantics
Grefenstette, Edward Thomas, January 2013
This thesis is about the problem of compositionality in distributional semantics. Distributional semantics presupposes that the meanings of words are a function of their occurrences in textual contexts. It models words as distributions over these contexts and represents them as vectors in high dimensional spaces. The problem of compositionality for such models concerns how to produce distributional representations for larger units of text (such as a verb and its arguments) by composing the distributional representations of smaller units of text (such as individual words). This thesis focuses on a particular approach to this compositionality problem, namely the categorical framework developed by Coecke, Sadrzadeh, and Clark (DisCoCat), which combines syntactic analysis formalisms with distributional semantic representations of meaning to produce syntactically motivated composition operations. This thesis shows how this approach can be theoretically extended and practically implemented to produce concrete compositional distributional models of natural language semantics. It furthermore demonstrates that such models can perform on par with, or better than, other competing approaches in the field of natural language processing. There are three principal contributions to computational linguistics in this thesis. The first is to extend the DisCoCat framework on both the syntactic and semantic fronts, incorporating a number of syntactic analysis formalisms and providing learning procedures that allow for the generation of concrete compositional distributional models. The second contribution is to evaluate the models developed from the procedures presented here, showing that they outperform other compositional distributional models present in the literature. The third contribution is to show how using category theory to solve linguistic problems forms a sound basis for research, illustrated by examples of work on this topic that also suggest directions for future research.
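As a hedged sketch of the compositional idea (a drastic simplification of the thesis's models, with invented toy vectors and a scalar sentence space), nouns become vectors, a transitive verb becomes a matrix, and the meaning of "subject verb object" is obtained by tensor contraction, so word order matters in a way it cannot for purely additive composition:

import numpy as np

rng = np.random.default_rng(0)
dim = 4
nouns = {w: rng.random(dim) for w in ("dogs", "cats", "postmen")}
chase = rng.random((dim, dim))   # verb as a (multi)linear map, here a matrix

def sentence(subj, verb, obj):
    # contraction subj^T . verb . obj collapses the phrase to a sentence meaning
    return nouns[subj] @ verb @ nouns[obj]

print(sentence("dogs", chase, "cats"), sentence("cats", chase, "dogs"))

In the models the thesis actually evaluates, the verb representations are learned from corpus co-occurrences rather than drawn at random, and the sentence space need not be one-dimensional.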
167. Program analysis with interpolants
Weissenbacher, Georg, January 2010
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
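For readers unfamiliar with the central notion, a standard textbook example (not drawn from this dissertation): given an unsatisfiable conjunction of formulae A and B, a Craig interpolant is a formula I over their shared vocabulary such that A implies I and I is inconsistent with B. In LaTeX notation:

\[
A \equiv (x > 0) \wedge (y = x), \qquad B \equiv (y < 0),
\]
\[
I \equiv (y > 0): \quad A \Rightarrow I \quad\text{and}\quad I \wedge B \text{ is unsatisfiable}.
\]

The weaker formula $I' \equiv (y \geq 0)$ is also an interpolant for this pair, which is the kind of choice of logical strength the dissertation's interpolation systems expose.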
168. The complexity and expressive power of valued constraints
Zivny, Stanislav, January 2009
This thesis is a detailed examination of the expressive power of valued constraints and related complexity questions. The valued constraint satisfaction problem (VCSP) is a generalisation of the constraint satisfaction problem which makes it possible to describe a variety of combinatorial optimisation problems. Although most results are stated in this framework, they can be interpreted equivalently in the framework of, for instance, pseudo-Boolean polynomials, Gibbs energy minimisation, or Markov Random Fields. We take a result of Cohen, Cooper and Jeavons that characterises the expressive power of valued constraints in terms of certain algebraic properties, and extend this result by showing a further connection between the expressive power of valued constraints and linear programming. We prove a decidability result for fractional clones.
We consider various classes of valued constraints and the associated cost functions with respect to the question of which of these classes can be expressed using only cost functions of bounded arity. We identify the first known example of an infinite chain of classes of constraints with strictly increasing expressive power, and present a full classification of various classes of constraints with respect to this problem.
We study submodular constraints and cost functions. Submodular functions play a key role in combinatorial optimisation and are often considered to be a discrete analogue of convex functions. It has previously been an open problem whether all Boolean submodular cost functions can be decomposed into a sum of binary submodular cost functions over a possibly larger set of variables. This problem has been considered within several different contexts in computer science, including computer vision, artificial intelligence, and pseudo-Boolean optimisation. Using a connection between the expressive power of valued constraints and certain algebraic properties of cost functions, we answer this question negatively. Our results have several corollaries. First, we characterise precisely which submodular polynomials of degree 4 can be expressed by quadratic submodular polynomials. Next, we identify a novel class of submodular functions of arbitrary arities that can be expressed by binary submodular functions, and therefore minimised efficiently using a so-called expressibility reduction to the (s,t)-Min-Cut problem. More importantly, our results imply limitations on this kind of reduction and establish for the first time that it cannot be used in general to minimise arbitrary submodular functions. Finally, we refute a conjecture of Promislow and Young on the structure of the extreme rays of the cone of Boolean submodular functions.
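For background (the defining inequality, not one of the thesis's results), submodularity of a cost function can be checked by brute force over a small ground set; the graph cut function below is the classic example, and its submodularity is what makes (s,t)-Min-Cut reductions attractive:

from itertools import combinations

def subsets(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_submodular(f, ground):
    # f is submodular iff f(S | T) + f(S & T) <= f(S) + f(T) for all S, T.
    return all(f(S | T) + f(S & T) <= f(S) + f(T) + 1e-9
               for S in subsets(ground) for T in subsets(ground))

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]   # a 4-cycle
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(is_submodular(cut, {1, 2, 3, 4}))    # True: cut functions are submodular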
169. Stream fusion: practical shortcut fusion for coinductive sequence types
Coutts, Duncan, January 2011
In functional programming it is common practice to build modular programs by composing functions where the intermediate values are data structures such as lists or arrays. A desirable optimisation for programs written in this style is to fuse the composed functions and thereby eliminate the intermediate data structures and their associated runtime costs. Stream fusion is one such fusion optimisation that can eliminate intermediate data structures, including lists, arrays and other abstract data types that can be viewed as coinductive sequences. The fusion transformation can be applied fully automatically by a general-purpose optimising compiler. The stream fusion technique itself has been presented previously and many practical implementations exist. The primary contributions of this thesis address the issues of correctness and optimisation: whether the transformation is correct and whether the transformation is an optimisation. Proofs of shortcut fusion laws have typically relied on parametricity by making use of free theorems. Unfortunately, most functional programming languages have semantics for which classical free theorems do not hold unconditionally; additional side conditions are required. In this thesis we take an approach based not on parametricity but on data abstraction. Using this approach we prove the correctness of stream fusion for lists -- encompassing the fusion system as a whole, not merely the central fusion law. We generalise this proof to give a framework for proving the correctness of stream fusion for any abstract data type that can be viewed as a coinductive sequence, and give, as an instance of the framework, a simple model of arrays. The framework requires that each fusible function satisfies a simple data abstraction property. We give proofs of this property for several standard list functions. Previous empirical work has demonstrated that stream fusion can be an optimisation in many cases. In this thesis we take a more universal view and consider the issue of optimisation independently of any particular implementation or compiler. We make a semi-formal argument that, subject to certain syntactic conditions on fusible functions, stream fusion on lists is strictly an improvement, as measured by the number of allocations of data constructors. This detailed analysis of how stream fusion works may be of use in writing fusible functions or in developing new implementations of stream fusion.
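As a hedged transliteration (the thesis works in Haskell, where the optimiser eliminates this machinery entirely; this Python rendering of mine only shows its shape), a stream pairs a state with a non-recursive step function, so a composed pipeline runs as a single loop without materialising intermediate lists:

DONE = ("Done",)

def stream_of_range(n):                # producer: the stream 0, 1, ..., n-1
    def step(i):
        return ("Yield", i, i + 1) if i < n else DONE
    return (step, 0)

def map_s(f, s):                       # transformer: non-recursive, allocates no list
    step, st = s
    def step2(state):
        r = step(state)
        return ("Yield", f(r[1]), r[2]) if r[0] == "Yield" else r
    return (step2, st)

def sum_s(s):                          # consumer: the only loop in the pipeline
    step, st = s
    acc = 0
    while True:
        r = step(st)
        if r[0] == "Done":
            return acc
        if r[0] == "Yield":
            acc, st = acc + r[1], r[2]
        else:                          # "Skip": advance the state, emit nothing
            st = r[1]

# sum (map square [0..9]) with no intermediate list:
print(sum_s(map_s(lambda x: x * x, stream_of_range(10))))   # 285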
170. Formal relationships in sequential object systems
Kerfoot, Eric D., January 2010
Formal specifications describe the behaviour of object-oriented systems precisely, with the intent of capturing all properties necessary for correctness. Relationships between objects, and in a broader sense between whole components, may not be adequately captured by such specifications. One critical component of specifications with a role in relationships is the invariant, which defines a constraint over multiple objects. If an object's invariant relies on external objects for its conditions, operations which abide by their own specifications while modifying those external objects may nevertheless violate the constraint. Such an invariant defines an unsound relationship between multiple objects, since it does not adequately describe the responsibilities which the objects in the relationship have to each other. The root cause of this correctness loophole is the failure of specifications to capture such relationships and their correctness requirements adequately. This thesis addresses this shortcoming in a number of ways, both for individual objects in a sequential environment and between concurrent components defined as specialized object types. The proposed Colleague Technique [29] defines sound invariants between two object types using classical Design-by-Contract [35] methodologies. Additional invariant conditions introduced through the technique ensure that no correct operation may produce a post-state which fails an invariant satisfied by the pre-state. Relationships between objects, as well as their correct specification and management, are the subjects of this thesis. Those relationships which can be described by invariants are made sound with the Colleague Technique, or with the lightweight ownership type system that accompanies it. Behavioural correctness beyond these can be addressed with specifications in a manner similar to sequential systems without concurrency, in particular through runtime assertion checking [11].
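As a hedged toy (classes and names invented here; the Colleague Technique itself is richer than this sketch), the loophole the abstract describes can be made concrete: an invariant that reads another object's state is falsified by an operation that is perfectly correct with respect to its own contract:

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):        # correct w.r.t. Account's own specification
        assert 0 <= amount <= self.balance
        self.balance -= amount

class Budget:
    def __init__(self, account, reserve):
        self.account, self.reserve = account, reserve
        assert self.invariant()

    def invariant(self):               # constrains an object Budget does not own
        return self.account.balance >= self.reserve

acct = Account(100)
plan = Budget(acct, reserve=80)
acct.withdraw(50)                      # abides by Account's specification...
print(plan.invariant())                # ...False: the relationship was unsound

A Colleague-style repair makes the obligation mutual, so that an operation such as withdraw must also preserve the invariants of the objects registered as its colleagues.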