  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Convergence and parameterization in O-minimal structures

Thomas, Margaret E. M. January 2009
No description available.

Constructing fixed points and economic equilibria

Hendtlass, Matthew January 2013
Constructive mathematics is mathematics with intuitionistic logic (together with some appropriate, predicative foundation); it is often crudely characterised as mathematics without the law of excluded middle. The intuitionistic interpretation of the connectives and quantifiers ensures that a constructive proof contains an inherent algorithm realising the computational content of the result it proves, and, in contrast to results from computable mathematics, these inherent algorithms come with fixed rates of convergence. The value of a constructive proof lies in the vast array of models for constructive mathematics. Realizability models, and the interpretation of constructive ZF set theory into Martin-Löf type theory, allow one to view constructive mathematics as a high-level programming language, and programs have been extracted from constructive proofs and implemented. Other models of constructive set theory, including topological forcing models, can be used to prove metamathematical results, for example guaranteeing the (local) continuity of functions or algorithms. In this thesis we have highlighted any use of choice principles; those results which do not require any choice are, in particular, valid in any topos. This thesis looks at what can and cannot be done in the study of the fundamental fixed point theorems from analysis, and gives some applications to mathematical economics where value is given to computability.
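The point about fixed rates of convergence can be made concrete with a small sketch (illustrative only, not taken from the thesis): the constructive content of Banach's fixed point theorem includes an a priori error bound, so the number of iterations needed for a given accuracy is computable in advance.

```python
import math

def fixed_point(f, x0, lip, eps):
    """Approximate the fixed point of a contraction f with Lipschitz
    constant lip < 1, starting from x0, to within eps.

    The constructive reading of Banach's theorem: the a priori bound
        |x_n - x*| <= lip**n / (1 - lip) * |f(x0) - x0|
    lets us compute the required number of iterations before iterating."""
    d0 = abs(f(x0) - x0)
    if d0 == 0.0:
        return x0
    # smallest n with lip**n * d0 / (1 - lip) <= eps
    n = max(0, math.ceil(math.log(eps * (1 - lip) / d0) / math.log(lip)))
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# f(x) = x/2 + 1 is a contraction with constant 1/2 and fixed point 2
approx = fixed_point(lambda x: x / 2 + 1, 0.0, 0.5, 1e-6)
```

The rate is what distinguishes this from a merely computable existence result: no search for a good enough iterate is needed, because the bound names one outright.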

Notions and applications of algorithmic randomness

Vermeeren, Stijn January 2013
Algorithmic randomness uses computability theory to define notions of randomness for infinite objects such as infinite binary sequences. The different possible definitions lead to a hierarchy of randomness notions. In this thesis we study this hierarchy, focussing in particular on Martin-Löf randomness, computable randomness and related notions. Understanding the relative strength of the different notions is a main objective. We look at proving implications where they exist (Chapter 3), as well as separating notions when they are not equivalent (Chapter 4). We also apply our knowledge about randomness to solve several questions about provability in axiomatic theories like Peano arithmetic (Chapter 5).
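One standard way to define such notions is via betting strategies: a sequence is computably random if no computable martingale makes unbounded profit betting on its bits. A minimal sketch of the idea (illustrative only, not from the thesis):

```python
def run_martingale(bits, fraction=0.5):
    """Bet a fixed fraction of current capital that the next bit is 0;
    a fair payout returns double the stake on a win and loses it otherwise.
    A sequence on which some computable strategy's capital grows without
    bound fails to be (computably) random."""
    capital = 1.0
    for b in bits:
        stake = fraction * capital
        capital += stake if b == 0 else -stake
    return capital

rich = run_martingale([0] * 100)    # highly regular prefix: capital explodes
poor = run_martingale([0, 1] * 50)  # this naive strategy loses on alternation
```

The hierarchy in the thesis arises by varying how much computational power the strategies (or, equivalently, the tests) are allowed.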

The epistemology of abstractionism

Oldemeier, Alexander Christoph Reinhard January 2012
I examine the nature and the structure of basic logico-mathematical knowledge. What justifies the truth of the Dedekind-Peano axioms and the validity of Modus Ponens? And is the justification we possess reflectively available? To make progress with these questions, I ultimately embed Hale's and Wright's neo-Fregeanism in a general internalistic epistemological framework. In Part I, I provide an introduction to the problems in the philosophy of mathematics to motivate the investigations to follow. I present desiderata for a fully satisfactory epistemology of mathematics and discuss relevant positions. All these positions turn out to be unsatisfactory, which motivates the abstractionist approach. I argue that abstractionism is in need of further explication when it comes to its central epistemological workings. I fill this gap by embedding neo-Fregeanism in an internalistic epistemological framework. In Part II, I motivate, outline, and discuss the consequences of the framework. I argue: (1) we need an internalistic notion of warrant in our epistemology and every good epistemology accounts for the possession of such warrant; (2) to avoid scepticism, we need to invoke a notion of non-evidential warrant (entitlement); (3) because entitlements cannot be upgraded, endorsing entitlements for mathematical axioms and validity claims would entail that such propositions cannot be claimed to be known. Because of (3), the framework appears to yield sceptical consequences. In Part III, I discuss (i) whether we can accept these consequences and (ii) whether we have to accept these consequences. As to (i), I argue that there is a tenable solely entitlement-based philosophy of mathematics and logic. However, I also argue that we can overcome limitations by vindicating the neo-Fregean proposal that implicit definitions can underwrite basic logico-mathematical knowledge.
One key manoeuvre here is to acknowledge that the semantic success of creative implicit definitions rests on substantial presuppositions, but to argue that relevant presuppositions are entitlements.

Reflection and potentialism

Roberts, Sam January 2016
It was widely thought that the paradoxes of Russell, Cantor, and Burali-Forti had been solved by the iterative conception of set. According to this conception, the sets occur in a well-ordered transfinite series of stages. On standard articulations – for example, those in Boolos (1971, 1989) – the sets are implicitly taken to constitute a plurality. Although sets may fail to exist at certain stages, they all exist simpliciter. But if they do constitute a plurality, what could stop them from forming a set? Without a satisfactory answer to this question, the paradoxes threaten to reemerge. In response, it has been argued that we should think of the sets as an inherently potential totality: whatever things there are, there could have been a set of them. In other words, any plurality could have formed a set. Call this potentialism. Actualism, in contrast, is the view that there could not have been more sets than there are: whatever sets there could have been, there are. This thesis explores a particular consideration in favour of actualism; namely, that certain desirable second-order resources are available to the actualist but not the potentialist. In the first part of chapter 1 I introduce the debate between potentialism and actualism and argue that some prominent considerations in favour of potentialism are inconclusive. In the second part I argue that potentialism is incompatible with the potentialist version of the second-order comprehension schema and point out that this schema appears to be required by strong set-theoretic reflection principles. In chapters 2 and 3 I explore the possibilities for reflection principles which are compatible with potentialism. In particular, in chapter 2 I consider a recent suggestion by Geoffrey Hellman for a modal structural reflection principle, and in chapter 3 I consider some influential proposals by William Reinhardt for modal reflection principles.

Handling uncertainty : from type-1 to interval type-2 fuzzy sets and systems

Aladi, Jabran January 2016
Fuzzy logic systems (FLSs) are widely accepted for their ability to model and handle uncertainty. Type-2 fuzzy sets (T2 FSs) were introduced as an extension of type-1 fuzzy sets (T1 FSs). They are characterised by membership functions (MFs) that are themselves fuzzy, with membership degrees expressed as FSs on [0,1], and have been widely accepted as capable of modelling uncertainty in more detail than T1 FSs. Interval type-2 fuzzy sets (IT2 FSs) are a special type of (general) T2 FSs and currently the most widely used, due to their reduced computational cost. The study of T2 FLSs is a rapidly growing research area with a wide range of application domains. Capturing the uncertainty arising from system noise has been a core feature of FLSs for many years. Since the concept of T2 FSs was introduced, a recurring question in research considering the application of T2 FSs is, ‘How much uncertainty in a given context warrants the use of T2 FSs and systems over their T1 counterparts?’ In other words, while a main issue in the application of FLSs is the estimation of parameters such as the type of fuzzy sets (FSs) and their parameters, as well as the number of rules, an even more fundamental question is whether T1 or T2 FSs should be used. More specifically, ‘How should T2 FSs be shaped in order for them to capture the uncertainties in a given application?’ Although there is experimental evidence showing improvements in terms of the uncertainty handling of interval type-2 fuzzy logic systems (IT2 FLSs) over their T1 counterparts, no systematic way of determining the potential advantages of employing T2 FLSs over T1 has yet been developed.
In an effort to relate the size of the footprint of uncertainty (FOU) of employed IT2 FSs to uncertainty levels and vice versa in a given application, this thesis shows the relationship between the size of the FOU of IT2 FSs and the uncertainty levels in a given application and explains how this knowledge can be exploited to inform the design of FLSs. To provide insight into this challenging aim, a detailed investigation of the ability of both T1 and IT2 FLSs to model different levels of uncertainty/noise is conducted. Design methodologies that systematically vary (blur) the size of the FOU of the IT2 FSs are introduced, enabling the comparison of FLSs that are equivalent in all but the size of the FOUs of the employed FSs. We describe an application-driven investigation into the relationship between the FOU size of the FSs and the level of uncertainty in applications by using time series prediction (TSP) as a well-defined and well-controlled sample application. Thus, TSP is used as a platform to comprehensively compare different FLSs with various FOUs. Through contrasting the performance of these resulting FLSs in the face of inputs with varying uncertainty levels in a rich set of TSP experiments, a distinct pattern of performance arising from the different levels-of-uncertainty and FOU-size combinations is explored and captured, showing a direct relationship between FOU size and uncertainty levels. For example, as the noise level increases, the FOU size that gives the best performance increases. Based on this, we provide guidelines for the selection of appropriate FOU sizes for given levels of uncertainty in a given application and propose an approach to quantifying the commonly used linguistic labels, ‘low’, ‘medium’ and ‘high’ through FS models. Finally, going beyond the question of selecting the most appropriate FOU at design time, we conduct some initial work on the appropriate adjustment of FOUs at run time, i.e., when uncertainty levels vary. 
Specifically, we explore the application of optimisation methods to refine FOU sizes in IT2 FSs.
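To fix ideas, an IT2 FS can be represented by a lower and an upper T1 membership function, and "blurring" a triangular T1 MF outward by a chosen amount creates a footprint of uncertainty whose size grows with the blur. A minimal sketch of this representation (the names and parameterisation are illustrative, not the thesis's design methodology):

```python
def tri(x, a, b, c):
    """Triangular type-1 membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, a, b, c, blur):
    """Interval type-2 membership of x: blur the feet of a triangular MF
    symmetrically outward/inward, so the degree becomes an interval."""
    upper = tri(x, a - blur, b, c + blur)
    lower = tri(x, a + blur, b, c - blur)
    return (min(lower, upper), max(lower, upper))

def fou_size(a, b, c, blur, samples=1000):
    """Approximate the FOU area (between lower and upper MFs) by sampling;
    a larger blur yields a larger FOU."""
    lo, hi = a - blur, c + blur
    width = hi - lo
    total = 0.0
    for i in range(samples):
        x = lo + width * i / samples
        l, u = it2_membership(x, a, b, c, blur)
        total += (u - l) * width / samples
    return total
```

Varying the blur parameter is one simple way to produce the family of FLSs, equivalent in all but FOU size, that the comparison described above requires.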

Some uses of cut elimination

Vizcaíno, Pedro Francisco Valencia January 2013
This thesis is mainly about Proof Theory. It can be thought of as Proof Theory in the sense of Hilbert, Gentzen, Schütte, Buchholz, Rathjen, and in general what could be called the German school, but it is also influenced by many other branches, of which the bibliography might give an idea. Intuitionism and other philosophical approaches to mathematics are also an important part of what is studied, but the Leitmotif of this thesis is Cut Elimination. The first part of the thesis is concerned with countable coded ω-models of Bar Induction. In this part we work from a reverse mathematics point of view. A study towards an ordinal analysis of the theory of Bar Induction (BI) is carried out, and we prove the equivalence between the statement that every set is contained in a countable coded ω-model of BI and the well-ordering principle ∀X[WO(X) → WO(ϑX)], which says that if X is a well-ordering, then so is its Bachmann-Howard relativisation ϑX. This is a new result as far as we know, and, we hope, an important one. In the second part of the thesis we shift our viewpoint and consider intuitionistic logic and intuitionistic geometric theories. We show that geometric derivability in classical infinitary logic implies derivability in intuitionistic infinitary logic. Again, our main tool is Cut Elimination. Next, we present investigations regarding minimal logic and classical logical principles, and give a precise classification of excluded middle, ex falso, and double negation elimination. Other themes and roads are possible and, the author feels, important, but time limitations as well as a sickly and utterly daft adherence to deadlines did not permit him to carry out these studies in full. It is quite shameful.

Spherical equidistribution in the space of adelic lattices and its applications

El-Baz, Daniel January 2016
The cut-and-project construction is a way of producing "quasiperiodic" point sets in ℝ^d. We start by fixing a lattice in ℝ^d × ℝ^m for some integer m ≥ 1. We then choose a "nice" set W in ℝ^m and project onto ℝ^d those lattice points whose projections onto ℝ^m lie in W. By placing a spherical scatterer around each of these points and considering the ballistic motion of a point particle in this array of obstacles, one obtains a Lorentz gas model with a quasiperiodic scatterer configuration. In this setting, Jens Marklof and Andreas Strömbergsson proved the existence of a limiting distribution of free path lengths in the Boltzmann-Grad limit, complementing previous work concerning random and periodic scatterer configurations. The construction described above can be abstracted so that we may replace ℝ^m with some other locally compact abelian group. In this thesis, we replace ℝ^m with A_f^d, where A_f is the ring of finite adeles. This enables us to produce a number of arithmetically interesting point sets, such as the primitive lattice points in ℤ^d, and study the distribution of free path lengths in the corresponding scatterer configurations. The work of Marklof and Strömbergsson crucially relies on W having non-empty interior and boundary of measure 0. In order to generate such sparse sets as the primitive lattice points, both of those conditions must be removed. By proving a spherical equidistribution theorem on the space of adelic lattices and a general probabilistic result, we are nonetheless able to deduce the existence of a limiting distribution for the free path lengths. By the same token, we also derive results about the local statistics of directions in translates of these point sets. When d = 2, we prove the existence of a universal limiting gap distribution for non-rational translates and that the pair correlation for Diophantine translates is Poissonian.
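The construction described in the first paragraph is easy to carry out concretely in the simplest case d = m = 1: take the lattice ℤ² with a physical line of slope 1/φ (φ the golden ratio) and the canonical window; the result is the Fibonacci quasicrystal, with exactly two gap lengths whose ratio is φ. A minimal sketch (illustrative only; the thesis works with adelic, not Euclidean, internal spaces):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def cut_and_project(n_max):
    """Project points of Z^2 lying in the canonical acceptance strip
    onto the 'physical' line of slope 1/phi; the result is the
    quasiperiodic Fibonacci point set."""
    norm = math.sqrt(PHI * PHI + 1)
    lo, hi = -1 / norm, PHI / norm      # window = shadow of the unit cell
    pts = []
    for m in range(-n_max, n_max + 1):
        for n in range(-n_max, n_max + 1):
            internal = (-m + PHI * n) / norm   # coordinate in internal space
            if lo <= internal < hi:            # "cut": keep points in window
                pts.append((PHI * m + n) / norm)  # "project" to physical space
    return sorted(pts)

# trim the ends so every kept point has its true neighbours in the sample
pts = [x for x in cut_and_project(40) if abs(x) <= 20]
gaps = sorted({round(b - a, 6) for a, b in zip(pts, pts[1:])})
```

With a half-open window only two tile lengths occur; shrinking or roughening W is exactly where the difficulties addressed by the thesis begin.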

Development and applications of fuzzy logic systems for reservoir characterisation and modelling

Finol, Jose J. January 2001
No description available.

Decision problems for partial specifications: empirical and worst-case complexity

Antonik, Adam January 2008
Partial specifications allow approximate models of systems such as Kripke structures or labelled transition systems to be created. Using the abstraction possible with these models, the state-space explosion problem can be avoided, whilst still retaining a structure over which properties can be checked. A single partial specification abstracts a set of systems, whether Kripke structures, labelled transition systems, or systems with both atomic propositions and named transitions. This thesis deals in part with decision problems arising from the desire to efficiently evaluate sentences of the modal μ-calculus over a partial specification. Partial specifications also allow a single system to be modelled by a number of partial specifications, which abstract away different parts of the system. Alternatively, a number of modal specifications may represent different requirements on a system. The thesis also addresses the question of whether a set of modal specifications is consistent, that is to say, whether a single system exists that is abstracted by each member of the set. The effect of nominals, special atomic propositions true on only one state in a system, on the problem of the consistency of many modal specifications is also considered. The thesis also addresses the question of whether the systems a modal specification abstracts are all abstracted by a second modal specification, the problem of inclusion. The thesis demonstrates how commonly used 'specification patterns', interesting properties expressible in the modal μ-calculus, can be efficiently evaluated over partial specifications, and gives upper and lower complexity bounds for the problems related to sets of partial specifications.
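The abstraction at work here can be made concrete: a partial (modal) specification carries "may" transitions that over-approximate and "must" transitions that under-approximate the systems it abstracts, and formulas are evaluated in a 3-valued way. A minimal sketch of the next-step modality under these semantics (illustrative only, not the thesis's algorithms):

```python
def box_next(prop_val, state, may, must):
    """3-valued evaluation of 'AX prop' at a state of a modal transition
    system, given 3-valued prop values (True / False / 'unknown'):
      True     if prop holds in every may-successor, hence in every
               successor of every abstracted system;
      False    if prop fails in some must-successor, which every
               abstracted system is obliged to have;
      'unknown' otherwise (the abstraction cannot decide)."""
    if all(prop_val[s] is True for s in may.get(state, [])):
        return True
    if any(prop_val[s] is False for s in must.get(state, [])):
        return False
    return "unknown"

# one abstract state 0 with may-successors {1, 2} but must-successor {1}
may, must = {0: [1, 2]}, {0: [1]}
verdict = box_next({1: True, 2: "unknown"}, 0, may, must)  # -> 'unknown'
```

The consistency and inclusion problems the thesis studies ask, in effect, when several such over/under-approximations can be realised by, or subsume, the same concrete system.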
