71

A study of normalisation through subatomic logic

Aler Tubella, Andrea January 2017 (has links)
We introduce subatomic logic, a new methodology in which, by looking inside atoms, we are able to represent a wide variety of proof systems in such a way that every rule is an instance of a single, regular, linear rule scheme. We show the generality of the subatomic approach by presenting how it can be applied to several systems with very different expressivity. In this thesis we use subatomic logic to study two normalisation procedures: cut-elimination and decomposition. In particular, we study cut-elimination by characterising a whole class of substructural logics and giving a generalised cut-elimination procedure for them, and we study decomposition by providing generalised rewriting rules for derivations that we can then apply to decompose derivations and to eliminate cycles.
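As a rough picture of what such a rule scheme can look like, the following is a sketch of the medial-like shape familiar from deep-inference systems, with the side conditions on which pairs of connectives α, β may be combined omitted; it is offered as an illustration rather than the thesis's exact scheme.

```latex
% Sketch of a single linear rule scheme in deep-inference style:
% the relations \alpha and \beta trade places, and different choices of
% \alpha and \beta (including atoms read as connectives) yield different rules.
\[
  \dfrac{(A \,\alpha\, B) \;\beta\; (C \,\alpha\, D)}
        {(A \,\beta\, C) \;\alpha\; (B \,\beta\, D)}
\]
```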
72

Model selection and model averaging in the presence of missing values

Gopal Pillay, Khuneswari January 2015 (has links)
Model averaging has been proposed as an alternative to model selection, intended to overcome the underestimation of standard errors that is a consequence of model selection. Model selection and model averaging become more complicated in the presence of missing data. Three different model selection approaches (RR, STACK and M-STACK) and model averaging using three model-building strategies (non-overlapping variable sets, inclusive and restrictive strategies) were explored to combine results from multiply-imputed data sets using a Monte Carlo simulation study on some simple linear and generalized linear models. Imputation was carried out using chained equations (via the "norm" method in the R package MICE). The simulation results showed that the STACK method performs better than RR and M-STACK in terms of model selection and prediction, whereas model averaging performs slightly better than STACK in terms of prediction. The inclusive and restrictive strategies perform better in terms of prediction, but the non-overlapping variable sets strategy performs better for model selection. STACK and model averaging using all three model-building strategies were proposed to combine the results from a multiply-imputed data set from the Gateshead Millennium Study (GMS). The performance of STACK and model averaging was compared using the mean square error of prediction (MSE(P)) in a 10% cross-validation test. The results showed that STACK using an inclusive strategy provided a better prediction than model averaging. This coincides with the results obtained through a simulation study mimicking the GMS data. In addition, the inclusive strategy for building imputation and prediction models was better than the non-overlapping variable sets and restrictive strategies. The presence of highly correlated covariates and response is believed to have led to better prediction in this particular context. Model averaging using non-overlapping variable sets performs better only if an auxiliary variable is available. However, STACK using an inclusive strategy performs well when there is no auxiliary variable available. Therefore, it is advisable to use STACK with an inclusive model-building strategy and highly correlated covariates (where available) to make predictions in the presence of missing data. Alternatively, model averaging with non-overlapping variable sets can be used if an auxiliary variable is available.
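As a rough illustration of the stacking idea (not the thesis's exact STACK procedure), the sketch below stacks several imputed copies of a data set, fits a single linear model to the stacked rows, and scores it by mean square error of prediction on a hold-out set; the array layout and function names are hypothetical.

```python
import numpy as np

def stack_and_fit(imputed_datasets, response_col):
    """Fit one OLS model to M multiply-imputed data sets stacked row-wise.

    imputed_datasets: list of M arrays, each of shape (n, p+1), with the
    response in column `response_col` and the predictors in the others.
    Note: the stacking literature usually weights each row by 1/M when
    computing standard errors; only the point estimate is computed here.
    """
    stacked = np.vstack(imputed_datasets)
    y = stacked[:, response_col]
    X = np.delete(stacked, response_col, axis=1)
    X = np.column_stack([np.ones(len(X)), X])     # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def mse_of_prediction(beta, holdout, response_col):
    """Mean square error of prediction, MSE(P), on a hold-out set."""
    y = holdout[:, response_col]
    X = np.delete(holdout, response_col, axis=1)
    X = np.column_stack([np.ones(len(X)), X])
    return float(np.mean((y - X @ beta) ** 2))
```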
73

Recursion on sets

Hoole, M. R. R. January 1982 (has links)
No description available.
74

Algorithms for recursive Frisch scheme identification and errors-in-variables filtering

Linden, J. G. January 2008 (has links)
This thesis deals with the development of algorithms for recursive estimation within the errors-in-variables framework. Within this context attention is focused on two major threads of research: Recursive system identification based on the Frisch scheme and the extension and application of errors-in-variables Kalman filtering techniques. In the first thread, recursive algorithms for the approximate update of the estimates obtained via the Frisch scheme, which makes use of the Yule-Walker model selection criterion, are developed for the case of white measurement noise. Gradient-based techniques are utilised to update the Frisch scheme equations, which involve the minimisation of the model selection criterion as well as the solution of an eigenvalue problem, in a recursive manner. The computational complexity of the resulting algorithms is critically analysed and, by introducing additional approximations, fast recursive Frisch scheme algorithms are developed, which reduce the computational complexity from cubic to quadratic order. In addition, it is investigated how the singularity condition within the Frisch scheme is affected when the estimates are computed recursively. Whilst this first group of recursive Frisch scheme algorithms is developed directly from the offline Frisch scheme equations, it is also possible to interpret the Frisch scheme within an extended bias compensating least squares framework. Consequently, the development of recursive algorithms, which update the estimate obtained from the extended bias compensated least squares technique, is considered. These algorithms make use of the bilinear parametrisation principle or, alternatively, the variable projection method. Finally, two recursive Frisch scheme algorithms are developed for the case of coloured output noise. The second thread, which considers the theory of errors-in-variables filtering for linear systems, extends the approach to deal with a class of bilinear systems, a frequently used subset of nonlinear systems. The application of errors-in-variables filtering for the purpose of system identification is also considered. This leads to the development of a prediction error method based on symmetric innovations, which resembles the joint output method. Both the offline and online implementation of this novel identification technique are investigated.
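The recursive algorithms discussed above build on recursive least-squares style updates; as a hedged sketch, the generic RLS core below shows the kind of per-sample update involved. It is not the thesis's Frisch scheme algorithm: the bias compensation for noise on both input and output, the Yule-Walker criterion and the eigenvalue step are all omitted.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic recursive least-squares update of a parameter estimate.

    Only the standard RLS core is shown; errors-in-variables methods such
    as the Frisch scheme additionally compensate for measurement noise on
    the regressors, which is not modelled here.
    """
    def __init__(self, n_params, forgetting=1.0, p0=1e3):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = np.eye(n_params) * p0         # covariance-like matrix
        self.lam = forgetting                  # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector for the new sample, y: new output value."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta             # a priori prediction error
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```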
75

Common metrics for cellular automata models of complex systems

Johnson, William January 2015 (has links)
The creation and use of models is critical not only to the scientific process, but also to life in general. Selected features of a system are abstracted into a model that can then be used to gain knowledge of the workings of the observed system and even anticipate its future behaviour. A key feature of the modelling process is the identification of commonality. This allows previous experience of one model to be used in a new or unfamiliar situation. This recognition of commonality between models allows standards to be formed, especially in areas such as measurement. How everyday physical objects are measured is built on an ingrained acceptance of their underlying commonality. Complex systems, often with their layers of interwoven interactions, are harder to model and, therefore, to measure and predict. Indeed, the inability to compute and model a complex system, except at a localised and temporal level, can be seen as one of its defining attributes. The establishing of commonality between complex systems provides the opportunity to find common metrics. This work looks at two-dimensional cellular automata, which are widely used as a simple modelling tool for a variety of systems. This has led to a very diverse range of systems using a common modelling environment based on a lattice of cells. This provides a possible common link between systems using cellular automata that could be exploited to find a common metric providing information on a diverse range of systems. An enhancement of a categorisation of cellular automata model types used for biological studies is proposed and expanded to include other disciplines. The thesis outlines a new metric, the C-Value, created by the author. This metric, based on the connectedness of the active elements on the cellular automata grid, is then tested with three models built to represent three of the four categories of cellular automata model types. The results show that the new C-Value is a good indicator of the gathering of active cells on a grid into a single, compact cluster and, when correlated with the mean density of active cells on the lattice, of whether their distribution is random. This provides a range to define the disordered and ordered states of a grid. The use of the C-Value in a localised context shows potential for identifying patterns of clusters on the grid.
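The exact definition of the C-Value is given in the thesis; as a hedged illustration of a connectedness-style statistic on a cellular automata grid, the sketch below labels clusters of active cells with a flood fill and reports the fraction of active cells in the largest cluster. The function name and the statistic itself are illustrative, not the author's formula.

```python
from collections import deque

def largest_cluster_fraction(grid):
    """Fraction of active cells in the largest 4-connected cluster.

    grid: 2D list of 0s and 1s.  Returns 0.0 if no cell is active.
    This is an illustrative connectedness measure, not the C-Value itself.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    active = sum(sum(row) for row in grid)
    if active == 0:
        return 0.0
    best = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # Flood fill one cluster of active cells.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] == 1 and not seen[ni][nj]:
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                best = max(best, size)
    return best / active
```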
76

On the complexities of polymorphic stream equation systems, isomorphism of finitary inductive types, and higher homotopies in univalent universes

Sattler, Christian January 2015 (has links)
This thesis is composed of three separate parts. The first part deals with definability and productivity issues of equational systems defining polymorphic stream functions. The main result consists of showing that such systems composed of only unary stream functions are complete with respect to specifying computable unary polymorphic stream functions. The second part deals with syntactic and semantic notions of isomorphism of finitary inductive types and associated decidability issues. We show that isomorphism of so-called guarded types is decidable in both the set model and the syntactic model, verifying that the answers coincide. The third part deals with homotopy levels of hierarchical univalent universes in homotopy type theory, showing that the n-th universe of n-types has truncation level strictly n+1.
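As a small, hedged illustration of the kind of object studied in the first part, the generator below implements one unary polymorphic stream function that could be specified by an equation such as f(x : xs) = x : x : f(xs); it only repeats and rearranges elements and never inspects them, which is the sense in which it is polymorphic.

```python
from itertools import count, islice

def duplicate_each(stream):
    """A unary polymorphic stream function: f(x : xs) = x : x : f(xs).

    The elements are never inspected, so the function works uniformly
    at every element type.
    """
    for x in stream:
        yield x
        yield x

# Usage: the first eight elements of f applied to the stream 0, 1, 2, ...
print(list(islice(duplicate_each(count()), 8)))   # [0, 0, 1, 1, 2, 2, 3, 3]
```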
77

Fuzzy Petri nets (Ασαφή δίκτυα Petri)

Κυρίτσης, Χαρίλαος 13 April 2009 (has links)
This thesis is about Fuzzy Petri Nets. After a historical overview of Petri nets, the various kinds of Petri nets and their properties are presented. In addition, some basic concepts concerning residuated structures, fuzzy sets, fuzzy logic, lattice-ordered abelian groups and MV-algebras are analysed. These concepts are essential for Fuzzy Petri Nets and, above all, for the development of the so-called Petri Algebra. Finally, some applications of Fuzzy Petri Nets are analysed.
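As a hedged sketch of how a fuzzy Petri net can propagate degrees of truth (one common min/max reading, not necessarily the semantics developed in the thesis), the transition below fires with a degree given by the minimum over its input places scaled by a certainty factor, and output places keep the maximum of their old and new values. Place names and parameters are hypothetical.

```python
def fire_transition(marking, inputs, outputs, certainty=1.0, threshold=0.0):
    """One min/max style firing rule for a fuzzy Petri net transition.

    marking:   dict mapping place name -> truth degree in [0, 1]
    inputs:    list of input place names
    outputs:   list of output place names
    certainty: certainty factor of the rule modelled by the transition
    """
    degree = min(marking[p] for p in inputs)
    if degree <= threshold:
        return marking                      # transition is not enabled
    fired = degree * certainty
    new_marking = dict(marking)
    for p in outputs:
        # Output places accumulate evidence via max.
        new_marking[p] = max(new_marking.get(p, 0.0), fired)
    return new_marking

# Usage with hypothetical places: rule "IF p1 AND p2 THEN p3" with CF 0.9.
m = {"p1": 0.8, "p2": 0.6, "p3": 0.0}
print(fire_transition(m, ["p1", "p2"], ["p3"], certainty=0.9))   # p3 -> 0.54
```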
78

Non-perturbative renormalization and low mode averaging with domain wall fermions

Arthur, Rudy January 2012 (has links)
This thesis presents an improved method to calculate renormalization constants in a regularization invariant momentum scheme using twisted boundary conditions. This enables us to simulate with momenta of arbitrary magnitude and a fixed direction. With this new technique, together with non-exceptional kinematics and volume sources, we are able to take a statistically and theoretically precise continuum limit. Thereafter, all the running of the operators with momentum scale is due to their anomalous dimension. We use this to develop a practical scheme for step scaling with off-shell vertex functions. We develop the method on 16³ × 32 lattices to show the practicality of using small volume simulations to step scale to high momenta. We also use larger 24³ × 64 and 32³ × 64 lattices to compute renormalization constants very accurately. Combining these with previous analyses we are able to extract a precise value for the light and strange quark masses and the neutral kaon mixing parameter B_K. We also analyse eigenvectors of the domain wall Dirac matrix. We develop a practical and cost effective way to compute eigenvectors using the implicitly restarted Lanczos method with Chebyshev acceleration. We show that calculating eigenvectors to accelerate propagator inversions is cost effective when as few as one or two propagators are required. We investigate the technique of low mode averaging (LMA) with eigenvectors of the domain wall matrix for the first time. We find that for low energy correlators, pions for example, LMA is very effective at reducing the statistical noise. We also calculated the η and η′ meson masses, which required evaluating disconnected correlation functions and combining stochastic sources with LMA.
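For the low-mode part of such calculations, the essential linear algebra is that the lowest eigenpairs of a Hermitian Dirac-like matrix approximate its inverse. The sketch below (a small random Hermitian matrix, not an actual domain wall operator) builds that low-mode contribution with a Lanczos-based eigensolver; it illustrates the idea only, not the thesis's LMA implementation.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def low_mode_inverse(H, n_modes):
    """Low-mode approximation to H^{-1} for a Hermitian matrix H.

    Uses the n_modes smallest eigenpairs:  H^{-1} ~ sum_i v_i v_i^dag / lambda_i.
    In lattice QCD this low-mode piece is what low mode averaging (LMA)
    treats exactly; the high-mode remainder is estimated separately (not shown).
    """
    vals, vecs = eigsh(H, k=n_modes, which='SA')   # smallest eigenvalues
    return (vecs / vals) @ vecs.conj().T

# Usage on a small random Hermitian, positive-definite "Dirac-like" matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A + A.T + 30.0 * np.eye(50)                    # symmetrise and shift to be PD
approx = low_mode_inverse(H, n_modes=10)
print(np.linalg.norm(approx - np.linalg.inv(H)))   # size of the truncation error
```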
79

Novel methods of measuring the similarity and distance between complex fuzzy sets

McCulloch, Josie C. January 2016 (has links)
This thesis develops measures that enable comparisons of subjective information that is represented through fuzzy sets. Many applications rely on information that is subjective and imprecise due to varying contexts, and so fuzzy sets were developed as a method of modelling uncertain data. However, making relative comparisons between data-driven fuzzy sets can be challenging. For example, when data sets are ambiguous or contradictory, the fuzzy set models often become non-normal or non-convex, making them difficult to compare. This thesis presents methods of comparing data that may be represented by such (complex) non-normal or non-convex fuzzy sets. The developed approaches for calculating relative comparisons also enable fusing methods of measuring similarity and distance between fuzzy sets. By using multiple methods, more meaningful comparisons of fuzzy sets are possible, whereas if only a single type of measure is used, ambiguous results are more likely to occur. This thesis provides a series of advances around the measuring of similarity and distance. Based on them, novel applications are possible, such as personalised and crowd-driven product recommendations. To demonstrate the value of the proposed methods, a recommendation system is developed that enables a person to describe their desired product in relation to one or more other known products. Relative comparisons are then used to find and recommend something that matches a person's subjective preferences. Demonstrations illustrate that the proposed method is useful for comparing complex, non-normal and non-convex fuzzy sets. In addition, the recommendation system is effective at using this approach to find products that match a given query.
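As a baseline for what "similarity" and "distance" mean here, the sketch below computes two standard measures on discretised membership functions: a Jaccard-style similarity and a mean absolute membership distance. These are textbook measures, not the thesis's novel constructions for non-normal and non-convex sets, and the sample membership vectors are hypothetical.

```python
import numpy as np

def jaccard_similarity(mu_a, mu_b):
    """Jaccard-style similarity of two fuzzy sets sampled on the same domain:
    sum of pointwise minima over sum of pointwise maxima."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    denom = np.maximum(mu_a, mu_b).sum()
    return 1.0 if denom == 0 else float(np.minimum(mu_a, mu_b).sum() / denom)

def membership_distance(mu_a, mu_b):
    """Mean absolute difference of membership values (a simple distance)."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    return float(np.mean(np.abs(mu_a - mu_b)))

# Usage on two hypothetical (possibly non-normal, non-convex) membership vectors.
a = [0.0, 0.3, 0.7, 0.2, 0.6, 0.1]
b = [0.1, 0.2, 0.8, 0.4, 0.3, 0.0]
print(jaccard_similarity(a, b), membership_distance(a, b))
```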
80

Reasoning by analogy in inductive logic

Hill, Alexandra January 2013 (has links)
This thesis investigates ways of incorporating reasoning by analogy into Pure (Unary) Inductive Logic. We start with an analysis of similarity as distance, noting that this is the conception that has received most attention in the literature so far. Chapter 4 looks in some detail at the consequences of adopting Hamming Distance as our measure of similarity, which proves to be a strong requirement. Chapter 5 then examines various adaptations of Hamming Distance and proposes a subtle modification, further-away-ness, that generates a much larger class of solutions.
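For concreteness, the Hamming distance considered in Chapter 4 can be read as counting the atoms on which two state descriptions disagree; a minimal sketch with hypothetical propositional valuations is below. The "further-away-ness" refinement of Chapter 5 is not reproduced here.

```python
def hamming_distance(valuation_a, valuation_b):
    """Number of atoms on which two truth-value assignments disagree."""
    if set(valuation_a) != set(valuation_b):
        raise ValueError("valuations must cover the same atoms")
    return sum(valuation_a[atom] != valuation_b[atom] for atom in valuation_a)

# Usage with hypothetical atoms p, q, r: the two valuations differ on q and r.
v1 = {"p": True, "q": True, "r": False}
v2 = {"p": True, "q": False, "r": True}
print(hamming_distance(v1, v2))   # 2
```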
