91 |
The first order theory of a dense pair and a discrete group. Khani, Mohsen. January 2013.
In this thesis we have shown that a seemingly complicated mathematical structure can exhibit 'tame behaviour'. The structure we have dealt with is a field (a space in which there are addition and multiplication operations satisfying natural properties) together with a dense subset (a subset which is spread through all parts of this set, as Q is in R) and a discrete subset (a subset comprised of single points which keep certain distances from one another). This tameness essentially amounts to not being trapped by the 'Gödel phenomenon', as Peano arithmetic is.
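In standard notation, a structure of the kind described can be written as the real field expanded by a dense predicate and a discrete multiplicative group. The specific example below is an illustrative assumption; the abstract names no particular structure.

```latex
% Illustrative example of a field with a dense subset and a discrete subset
% (an assumption for exposition; not taken from the abstract).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
  \mathcal{M} \;=\; \bigl(\mathbb{R};\, +,\, \cdot,\, <,\, \mathbb{Q},\, 2^{\mathbb{Z}}\bigr),
  \qquad
  \mathbb{Q} \text{ dense in } \mathbb{R}, \quad
  2^{\mathbb{Z}} = \{\dots, \tfrac14, \tfrac12, 1, 2, 4, \dots\} \text{ discrete.}
\]
\end{document}
```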
|
92 |
Quantitative decision making in reverse logistics networks with uncertainty and quality of returns considerations. Niknejad, A. January 2014.
Quantitative modelling of reverse logistics networks and product recovery has been the focus of many research activities in the past few decades. Interest in these models is mostly due to the complexity of reverse logistics networks, which necessitates further analysis with the help of mathematical models. In comparison to traditional forward logistics networks, reverse logistics networks have to deal with issues of return quality as well as a high degree of uncertainty in the return flow. Additionally, a variety of recovery routes exists, such as reuse, repair, remanufacturing and recycling. Deciding how to utilise these routes requires the quality of returns and the uncertainty of the return flow to be considered. In this research, integrated forward and reverse logistics networks with repair, remanufacturing and disposal routes are considered. Returns are assumed to be classified by quality into ordinal quality levels, and quality thresholds are used to split the returned products into repairable, remanufacturable and disposable returns. Fuzzy numbers are used to model the uncertainty in demand and in the return quantities at different quality levels. Setup costs, non-stationary demand and return quantities, and different lead times are considered. To facilitate decision making in such networks, a two-phase optimisation model is proposed. Given quality thresholds as parameters, the decision variables, including the quantities of products sent to repair, disassembly and disposal, the components to be procured, and the products to be repaired, disassembled or produced for each time period within the time horizon, are determined using a fuzzy optimisation model. A sensitivity analysis of the fuzzy optimisation model is carried out on the network parameters, including the quantity of returned products, unit repair and disassembly costs, and procurement, production, disassembly and repair setup costs. A fuzzy controller is proposed to determine the quality thresholds based on ratios of the reverse logistics network parameters, including the repair to new unit cost, disassembly to new unit cost, repair to disassembly setup, disassembly to procurement setup and return to demand ratios. The fuzzy controller's sensitivity is also examined in relation to parameters such as average repair and disassembly costs; repair, disassembly, production and procurement setup costs; and the return to demand ratio. Finally, a genetic fuzzy method is developed to tune the fuzzy controller and improve its rule base. The rule base obtained and the results of the sensitivity analyses are used to gain better managerial insight into these reverse logistics networks.
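As a rough sketch of two ingredients mentioned above (a hypothetical Python illustration, not the thesis model; all names, levels and numbers are invented): triangular fuzzy numbers for uncertain return quantities, and quality thresholds that route returns at ordinal quality levels to repair, remanufacturing or disposal.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    """Triangular fuzzy number (a, m, b): support [a, b], peak at m."""
    a: float
    m: float
    b: float

    def __add__(self, other):
        # Fuzzy addition: component-wise, as in standard fuzzy arithmetic.
        return TriangularFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

    def defuzzify(self):
        # Centroid of the triangle: a common crisp estimate.
        return (self.a + self.m + self.b) / 3.0

def route_returns(returns_by_quality, repair_threshold, remanufacture_threshold):
    """Split returns per ordinal quality level (higher = better).
    Levels >= repair_threshold go to repair, levels >= remanufacture_threshold
    to disassembly/remanufacturing, and the rest to disposal."""
    routed = {"repair": [], "remanufacture": [], "dispose": []}
    for level, quantity in sorted(returns_by_quality.items(), reverse=True):
        if level >= repair_threshold:
            routed["repair"].append((level, quantity))
        elif level >= remanufacture_threshold:
            routed["remanufacture"].append((level, quantity))
        else:
            routed["dispose"].append((level, quantity))
    return routed

# Uncertain return quantities at quality levels 1..4, as fuzzy numbers.
returns = {
    4: TriangularFuzzy(80, 100, 120),
    3: TriangularFuzzy(50, 70, 90),
    2: TriangularFuzzy(30, 40, 60),
    1: TriangularFuzzy(10, 20, 30),
}
routed = route_returns(returns, repair_threshold=4, remanufacture_threshold=2)
total_reman = routed["remanufacture"][0][1] + routed["remanufacture"][1][1]
print(routed["dispose"], total_reman.defuzzify())
```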
|
93 |
Evaluating novel pedagogy in higher education : a case study of e-proofs. Roy, Somali. January 2014.
This thesis is a single case study of the introduction and evaluation of new resources and new technologies in higher education, in which e-Proofs were chosen as the case. An e-Proof is a multimedia representation of a proof, created by Alcock (2009) and intended to help undergraduates read proofs with better comprehension. The thesis aimed to investigate whether reading such a technology-based resource had a greater impact on undergraduates' proof comprehension than reading written textbook proofs and, if so, why (or why not). To evaluate the effectiveness of e-Proofs, I used both qualitative and quantitative methods. First I measured undergraduates' satisfaction, one of the most common research practices in evaluation studies, using self-reporting methods such as a web-based survey and interviews. The web-based survey and focus-group interviews showed that undergraduates liked having e-Proofs and believed that e-Proofs had a positive impact on their proof comprehension. However, their positive views did not in themselves evidence the educational impact of e-Proofs. I conducted an interview with Alcock to better understand her intentions in creating e-Proofs and her expectations of them. Next, I conducted the first experiment, which compared the impact on undergraduates' proof comprehension of reading an e-Proof with that of reading a written textbook proof. Comprehension was measured with an open-ended comprehension test twice: immediately after reading the proof and after two weeks. I found that the immediate impact of reading an e-Proof and a textbook proof was essentially the same, but the long-term impact of reading an e-Proof was worse than that of reading a textbook proof (for both high- and low-achieving undergraduates). This led to the second experiment, in which I investigated how undergraduates read e-Proofs and textbook proofs. In the second experiment, participants' eye movements were recorded while reading proofs, to explore their reading comprehension processes. This eye-tracking experiment showed that undergraduates had a sense of how to read a proof without any additional help. Rather, additional help allowed them to take a back seat and to devote less cognitive effort than they would otherwise. Moreover, e-Proofs altered undergraduates' reading behaviours in a way which can harm learning. In sum, this thesis contributes knowledge to the area of reading and comprehending proofs at undergraduate level and presents a methodology for evaluation studies of new pedagogical tools.
|
94 |
Quantifying sources of variation in multi-model ensembles : a process-based approach. Sessford, Patrick Denis. January 2015.
The representation of physical processes by a climate model depends on its structure, numerical schemes, physical parameterizations and resolution, with initial conditions and future emission scenarios further affecting the output. The extent to which climate models agree is therefore of great interest, often with greater confidence in results that are robust across models. This has led to climate model output being analysed as ensembles rather than in isolation, and quantifying the sources of variation across these ensembles is the aim of many recent studies. Statistical attempts to do this include the use of variants of the mixed-effects analysis of variance or covariance (mixed-effects ANOVA/ANCOVA). This work usually focuses on identifying variation in a variable of interest that is due to differences in model structure, carbon emissions scenario, etc. Quantifying such variation is important in determining where models agree or disagree, but further statistical approaches can be used to diagnose the reasons behind the agreements and disagreements by representing the physical processes within the climate models. A process-based approach is presented that uses simulation with statistical models to perform a global sensitivity analysis and quantify the sources of variation in multi-model ensembles. This approach is a general framework that can be used with any generalised linear mixed model (GLMM), which makes it applicable with statistical models designed to represent (sometimes complex) physical relationships within different climate models. The method decomposes the variation in the response variable into variation due to 1) temporal variation in the driving variables, 2) variation across ensemble members in the distributions of the driving variables, 3) variation across ensemble members in the relationship between the response and the driving variables, and 4) variation unexplained by the driving variables. The method is used to quantify the extent to which, and diagnose why, precipitation varies across and within the members of two different climate model ensembles on various spatial and temporal scales. Change in temperature in response to increased CO2 is related to change in global-mean annual-mean precipitation in a multi-model ensemble of general circulation models (GCMs). A total of 46% of the variation in the change in precipitation in the ensemble is found to be due to the differences between the GCMs, largely because the distribution of the changes in temperature varies greatly across different GCMs. The total variation in the annual-mean change in precipitation that is due to the differences between the GCMs depends on the area over which the precipitation is averaged, and can be as high as 63%. The second climate model ensemble is a perturbed physics ensemble using a regional climate model (RCM). This ensemble is used for three different applications. Firstly, by using lapse rate, saturation specific humidity and relative humidity as drivers of daily-total summer convective precipitation at the grid-point level over southern Britain, up to 8% of the variation in the convective precipitation is found to be due to the uncertainty in RCM parameters. This is largely because the same atmospheric conditions lead to different rates of precipitation in different ensemble members. This could not be detected by analysing only the variation across the ensemble members in mean precipitation rate (precipitation bias).
Secondly, summer-total precipitation at the grid-point level over the British Isles is used to show how the values of the RCM parameters can be incorporated into a GLMM to quantify the variation in precipitation due to perturbing each individual RCM parameter. Substantial spatial variation is found in the effect on precipitation of perturbing different RCM parameters. Thirdly, the method is extended to focus on extreme events, and the simulation of extreme winter pentad (five-day mean) precipitation events averaged over the British Isles is found to be robust to the uncertainty in RCM parameters.
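The four-way decomposition can be imitated on a toy example (a simplified hypothetical sketch, not the thesis code or data): each ensemble member gets its own driver distribution and its own linear response, and the four components are estimated directly from the simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: member m has driver x ~ N(mu[m], s**2) over time and a
# member-specific linear response y = a[m] + b[m] * x + noise.
M, T = 8, 500                      # ensemble members, time steps
mu = rng.normal(15.0, 2.0, M)      # member-specific driver means (source 2)
a = rng.normal(1.0, 0.5, M)        # member-specific intercepts   (source 3)
b = rng.normal(2.0, 0.3, M)        # member-specific slopes       (source 3)
s, tau = 1.0, 0.8                  # temporal driver sd (source 1), residual sd (source 4)

x = mu[:, None] + s * rng.standard_normal((M, T))
y = a[:, None] + b[:, None] * x + tau * rng.standard_normal((M, T))

total = y.var()

# 1) temporal variation in the drivers: within-member variance driven by x.
comp_temporal = np.mean(b**2) * s**2
# 2) across-member variation in driver distributions (relationship pooled).
comp_driver = (a.mean() + b.mean() * mu).var()
# 3) across-member variation in the response relationship (drivers pooled).
comp_relation = (a + b * mu.mean()).var()
# 4) variation unexplained by the drivers.
comp_residual = tau**2

for name, v in [("total", total), ("temporal", comp_temporal),
                ("driver dists", comp_driver), ("relationship", comp_relation),
                ("residual", comp_residual)]:
    print(f"{name:>12}: {v:6.2f}")
# Note: the four components need not sum exactly to the total, since
# interactions between sources are possible; this is one reason to use
# simulation with a fitted GLMM rather than a closed form.
```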
|
95 |
On Galois correspondences in formal logic. Yim, Austin Vincent. January 2012.
This thesis examines two approaches to Galois correspondences in formal logic. A standard result of classical first-order model theory is the observation that models of L-theories with a weak form of elimination of imaginaries admit a correspondence between their substructures and the automorphism groups defined on them. This work applies the resulting framework to explore the practical consequences of a model-theoretic Galois theory with respect to certain first-order L-theories. The framework is also used to motivate an examination of its underlying model-theoretic foundations. The model-theoretic Galois theory of pure fields and valued fields is compared to the algebraic Galois theory of pure and valued fields to point out differences that may hold between them. The framework of this logical Galois correspondence is also applied to the theory of pseudoexponentiation to obtain a sketch of the Galois theory of exponential fields, where the fixed substructure of the complex pseudoexponential field B is an exponential field with the field Qrab as its algebraic subfield. This work obtains a partial exponential analogue of the Kronecker-Weber theorem by describing the pure field-theoretic abelian extensions of Qrab, expanding upon work on the twelfth of Hilbert's problems. This result is then used to determine some of the model-theoretic abelian extensions of the fixed substructure of B. This work also incorporates the principles required of this model-theoretic framework in order to develop a model theory over substructural logics which is capable of expressing this Galois correspondence. A formal semantics is developed for quantified predicate substructural logics based on algebraic models of their propositional or unquantified fragments. This semantics is then used to develop substructural forms of standard results in classical first-order model theory. This work then uses this substructural model theory to demonstrate the Galois correspondence that substructural first-order theories can carry in certain situations.
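For reference, the classical theorem of which the thesis obtains a partial exponential analogue is the following (the statement below is the standard one; it is not taken from the thesis itself).

```latex
% Classical Kronecker-Weber theorem, stated for context.
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}[Kronecker--Weber]
Every finite abelian extension $K$ of $\mathbb{Q}$ is contained in a
cyclotomic field: $K \subseteq \mathbb{Q}(\zeta_n)$ for some root of
unity $\zeta_n$.
\end{theorem}
\end{document}
```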
|
96 |
Petri nets, probability and event structures. Ghahremani Azghandi, Nargess. January 2014.
Models of true concurrency have gained a lot of interest over the last decades as models of concurrent or distributed systems which avoid the well-known state space explosion problem of interleaving models. In this thesis, we study such models from two perspectives. Firstly, we study the relation between Petri nets and stable event structures. Petri nets can be considered one of the most general and perhaps most widespread models of true concurrency. Event structures, on the other hand, are simpler models of true concurrency with explicit causality and conflict relations. Stable event structures expand the class of event structures by allowing events to be enabled in more than one way. While the relation between Petri nets and event structures is well understood, the relation between Petri nets and stable event structures has not been studied explicitly. We define a new and more compact unfolding of safe Petri nets which is directly translatable to stable event structures. In addition, the notion of complete finite prefix is defined for compact unfoldings, making the existing model checking algorithms applicable to them. We present algorithms for constructing compact unfoldings and their complete finite prefixes. Secondly, we study probabilistic models of true concurrency. We extend the definition of probabilistic event structures, as defined by Abbes and Benveniste, to a newly defined class of stable event structures, namely jump-free stable event structures arising from Petri nets (characterised and referred to as net-driven). This requires defining the fundamental concept of branching cells of probabilistic event structures for jump-free net-driven stable event structures, and by proving the existence of an isomorphism among the branching cells of these systems, we show that the latter benefit from the related results on the former models. We then move on to defining a probabilistic logic over probabilistic event structures (PESL). To the best of our knowledge, this is the first probabilistic logic of true concurrency. We show examples of the expressivity achieved by PESL, which in particular include properties related to synchronisation in the system. This is followed by a model checking algorithm for PESL for finite event structures. Finally, we present a logic over stable event structures (SEL), along with an account of its expressivity and a model checking algorithm for finite stable event structures.
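For readers unfamiliar with the base model, a minimal sketch of a 1-safe Petri net follows (a generic illustration, not the unfolding construction of the thesis). It includes the conflict and concurrency tests whose information event structures record explicitly as relations.

```python
# Minimal 1-safe Petri net: places hold at most one token; a transition is
# enabled when all its pre-places are marked, and firing moves the tokens.
# Two enabled transitions sharing a pre-place are in conflict; enabled
# transitions with disjoint neighbourhoods are concurrent -- the two
# relations that event structures make explicit.

class PetriNet:
    def __init__(self, pre, post, marking):
        self.pre = {t: frozenset(p) for t, p in pre.items()}    # t -> pre-places
        self.post = {t: frozenset(p) for t, p in post.items()}  # t -> post-places
        self.marking = set(marking)                             # marked places

    def enabled(self, t):
        return self.pre[t] <= self.marking

    def fire(self, t):
        assert self.enabled(t), f"{t} is not enabled"
        self.marking = (self.marking - self.pre[t]) | self.post[t]

    def in_conflict(self, t1, t2):
        # Forward conflict: both enabled but competing for a shared token.
        return (self.enabled(t1) and self.enabled(t2)
                and bool(self.pre[t1] & self.pre[t2]))

    def concurrent(self, t1, t2):
        # Independent: both enabled, no shared places at all.
        return (self.enabled(t1) and self.enabled(t2)
                and not (self.pre[t1] | self.post[t1])
                        & (self.pre[t2] | self.post[t2]))

# p1 feeds both a and b (conflict); c runs independently of a (concurrency).
net = PetriNet(
    pre={"a": {"p1"}, "b": {"p1"}, "c": {"p2"}},
    post={"a": {"p3"}, "b": {"p4"}, "c": {"p5"}},
    marking={"p1", "p2"},
)
print(net.in_conflict("a", "b"))  # True: a and b compete for p1
print(net.concurrent("a", "c"))   # True: a and c are independent
net.fire("a")
print(net.enabled("b"))           # False: the shared token is gone
```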
|
97 |
2-tactics in the Choquet game, and the filter dichotomy. Lupton, Richard J. January 2014.
This thesis comprises two parts. The first part (Chapter 1) concerns a problem born from descriptive set theory, and is motivated by the desire to understand a particular topological game in the presence of examples which exhibit interesting (arguably counterintuitive) behaviour. The work develops an understanding of the limits of this behaviour, and indicates where one might look for interesting examples. While set-theoretic techniques appear in the analysis here, the focus is mostly topological. The second part (Chapter 2) is much more set-theoretic. The work there developed from an interest in characterising closed subsets of βω, and focuses on developing tools which are promising for generalising the current relative consistency proofs of a duality principle for closed subsets of βω \ ω. Knowledge of forcing comparable to that contained in Kunen and Jech is assumed. The appendices contain results which are worthy of inclusion and help provide additional perspective on the rest of the text. The results there are probably all already known, if not all recorded in the literature.
|
98 |
Application of an automatically designed fuzzy logic decision support system to connection admission control in ATM networks. Natario Romalho, Maria Fernanda. January 1996.
No description available.
|
99 |
Application of kinetic data structures to problems of computational geometry. Τσιμά, Αλεξάνδρα. 29 August 2008.
Kinetic data structures (KDSs) are a framework for designing and analyzing algorithms for geometric objects (segments, polygons, disks, etc.) in motion. The goal is to maintain an attribute of a set of moving objects, for example their convex hull or closest pair. The attribute is maintained through a set of conditions that guarantee the validity of the structure at every moment; this set changes over time because of the motion. The conditions are stored in a queue, ordered chronologically. Every time the attribute of interest changes, the structure and the queue are updated.
The first chapter is an introduction to KDSs. It presents their basic notions and ideas, such as the configuration function, certificates and critical events, and discusses their measures of performance.
The second chapter deals with binary space partitions (BSPs), first in a static and then in a kinetic environment. Specifically, it presents three algorithms for maintaining the BSP of a set of moving segments in the plane. Following the first known algorithm proposed for efficiently maintaining the BSP of a set of non-intersecting segments S in the plane within the KDS framework, a BSP for S is constructed with the segments considered static and is then maintained as they move. The second algorithm is essentially an extension of the first, as it deals with the same problem but for intersecting segments; the set of certificates changes, as do the ways in which the BSP structure can change (the critical events). The third algorithm uses a different technique for constructing and maintaining the BSP of the set S, improving on the first.
The third chapter deals with maintaining the Voronoi diagram (VD) of a set of moving, possibly intersecting disks in the plane, and a compact Voronoi-like diagram of a set of non-intersecting convex polygons in the plane (the compact VD is dual to the VD, except that its size is a function of the number of polygons rather than the number of vertices). In both cases the problem reduces to maintaining the dual of the VD, the Delaunay triangulation (DT). Maintenance of the DT relies on the fact that a set of local conditions (InCircle tests) certifies the global correctness of the structure and that local repairs are always possible. Thus, as the objects move, a valid DT is available at every moment, and consequently a valid VD.
Finally, a KDS is presented for detecting collisions between two simple polygons in motion. The algorithm maintains a subdivision of the free space between the polygons, called an external relative geodesic triangulation, which certifies their disjointness.
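The certificate-and-event-queue machinery described in this abstract can be illustrated on the simplest kinetic attribute, the sorted order of points moving on a line. The sketch below is a hypothetical illustration, not code from the thesis: one certificate per adjacent pair, failure times kept in a priority queue, and local certificate repair after each swap.

```python
import heapq

# Points move linearly: x_i(t) = x0_i + v_i * t.
# Attribute maintained: the sorted order of the points.
# Certificate for adjacent pair (p, q): x_p(t) < x_q(t). A certificate
# fails when the points cross; failure times are the critical events.

def failure_time(p, q, now):
    """Earliest t > now at which point p catches up with point q, or None."""
    (x0p, vp), (x0q, vq) = p, q
    if vp <= vq:                       # p never catches q
        return None
    t = (x0q - x0p) / (vp - vq)        # solve x_p(t) = x_q(t)
    return t if t > now else None

def kinetic_sort(points, t_end):
    """Maintain the sorted order of linearly moving points up to time t_end.
    points: list of (x0, v). Returns the order (as indices) at t_end."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    events = []                        # heap of (failure_time, slot, version)
    version = [0] * (len(points) - 1)  # invalidates stale events per slot

    def schedule(pos, now):
        if 0 <= pos < len(points) - 1:
            t = failure_time(points[order[pos]], points[order[pos + 1]], now)
            if t is not None and t <= t_end:
                heapq.heappush(events, (t, pos, version[pos]))

    for pos in range(len(points) - 1):
        schedule(pos, 0.0)

    while events:
        t, pos, ver = heapq.heappop(events)
        if ver != version[pos]:
            continue                   # stale certificate, skip
        order[pos], order[pos + 1] = order[pos + 1], order[pos]  # swap
        for p in (pos - 1, pos, pos + 1):   # only local certificates change
            if 0 <= p < len(points) - 1:
                version[p] += 1
                schedule(p, t)
    return order

# Two points that cross at t = 1, one that is overtaken later.
print(kinetic_sort([(0.0, 2.0), (1.0, 1.0), (5.0, 0.0)], t_end=3.0))  # [1, 2, 0]
```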
|
100 |
Improvement and exploitation of a theorem prover. Γριβοκωστοπούλου, Φωτεινή. 15 March 2010.
Automatic Theorem Proving (ATP) systems are systems based on first-order logic which, from a set of logical sentences, can automatically deduce the truth of a given logical sentence. The proof procedure in most ATP systems relies on the resolution principle, the strongest existing rule of logical inference, and on resolution refutation, a process that ensures the soundness of the conclusions.
ACT-P is an ATP system based on the resolution principle and resolution refutation, written in Gold-Hill's GCLISP Developer 5.0. It provides a library of well-known strategies for controlling the proof process, allowing the user to specify a suitable combination of strategies for each problem.
In this work, ACT-P was first ported to LispWorks, a more powerful tool for developing Lisp applications. Moreover, a new window-based interface was developed, through which the user can see two different solutions of the same problem, a brief one and an analytic one.
Next, the correct functioning of ACT-P and its strategies was tested on problems from the TPTP (Thousands of Problems for Theorem Provers), a well-known library of problems for ATP systems on the web, and the necessary corrections were made so that it solves problems from various categories of the TPTP library.
Finally, different combinations of control strategies were studied on various TPTP problems, leading to useful conclusions about their suitability and performance depending on the kind of problem.
|