161. Finite-state abstractions for probabilistic computation tree logic. Wagner, Daniel. January 2011.
Probabilistic Computation Tree Logic (PCTL) is the established temporal logic for probabilistic verification of discrete-time Markov chains. Probabilistic model checking is a technique that verifies or refutes whether a property specified in this logic holds in a Markov chain. But Markov chains are often infinite or too large for this technique to apply. A standard solution to this problem is to convert the Markov chain to an abstract model and to model check that abstract model. The problem this thesis therefore studies is whether or when such finite abstractions of Markov chains for model checking PCTL exist. This thesis makes the following contributions. We identify a sizeable fragment of PCTL for which 3-valued Markov chains can serve as finite abstractions; this fragment is maximal for those abstractions and subsumes many practically relevant specifications including, e.g., reachability. We also develop game-theoretic foundations for the semantics of PCTL over Markov chains by capturing the standard PCTL semantics via two-player games. These games, finally, inspire a notion of p-automata, which accept entire Markov chains. We show that p-automata subsume PCTL and Markov chains; that their languages of Markov chains have pleasant closure properties; and that the complexity of deciding acceptance matches that of probabilistic model checking for p-automata representing PCTL formulae. In addition, we offer a simulation between p-automata that under-approximates language containment. These results then allow us to show that p-automata comprise a solution to the problem studied in this thesis.
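To make the reachability case concrete, the sketch below (not taken from the thesis; the chain, its probabilities and the threshold are invented) shows how a PCTL reachability property such as P>=0.9 [F goal] is checked on a small finite discrete-time Markov chain: the probability of eventually reaching the goal state satisfies a linear system over the remaining states.

```python
import numpy as np

# Hypothetical 4-state DTMC: P[i, j] is the transition probability from i to j.
P = np.array([
    [0.0, 0.6, 0.3, 0.1],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.0, 1.0, 0.0],   # state 2 is absorbing and never reaches the goal
    [0.0, 0.0, 0.0, 1.0],   # state 3 is the goal (absorbing)
])
goal, zero = {3}, {2}        # 'zero' states would normally come from a graph analysis
unknown = [s for s in range(len(P)) if s not in (goal | zero)]

# Reachability probabilities x satisfy x = A x + b on the unknown states, where A
# restricts P to those states and b collects one-step jumps into the goal.
A = P[np.ix_(unknown, unknown)]
b = P[np.ix_(unknown, sorted(goal))].sum(axis=1)
x = np.linalg.solve(np.eye(len(unknown)) - A, b)

for s, p in zip(unknown, x):
    # P>=0.9 [F goal] holds in state s exactly when p >= 0.9.
    print(f"Pr(reach goal from state {s}) = {p:.4f}")
```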
162. An active-library based investigation into the performance optimisation of linear algebra and the finite element method. Russell, Francis Prem. January 2011.
In this thesis, I explore an approach called "active libraries". These are libraries that take part in their own optimisation, enabling both high-performance code and the presentation of intuitive abstractions. I investigate the use of active libraries in two domains: firstly, dense and sparse linear algebra, particularly the solution of linear systems of equations; secondly, the specification and solution of finite element problems. Extending my earlier (MEng) thesis work, I describe the modifications to my linear algebra library "Desola" required to perform sparse-matrix code generation. I show that optimisations easily applied in the dense case using code transformation must be applied at a higher level of abstraction in the sparse case. I present performance results for sparse linear system solvers generated using Desola and compare against an implementation using the Intel Math Kernel Library. I also present improved dense linear-algebra performance results. Next, I explore the active-library approach by developing a finite element library that captures runtime representations of basis functions, variational forms and sequences of operations between discretised operators and fields. Using captured representations of variational forms and basis functions, I demonstrate optimisations to cell-local integral assembly that this approach enables, and compare against the state of the art. As part of my work on optimising local assembly, I extend the work of Hosangadi et al. on common sub-expression elimination and factorisation of polynomials. I improve the weight function presented by Hosangadi et al., increasing the number of factorisations found. I present an implementation of an optimised branch-and-bound algorithm inspired by reformulating the original matrix-covering problem as a maximal graph biclique search problem. I evaluate the algorithm's effectiveness on the expressions generated by my finite element solver.
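As a hedged illustration of the dense/sparse gap described above (this is generic CSR code, not Desola's generated output), the sketch below shows the indirect indexing at the heart of a sparse matrix-vector product: the data-dependent loop bounds and the access x[col_idx[k]] are what make dense-style loop transformations inapplicable without lifting the optimisation to a higher level of abstraction.

```python
import numpy as np

# A small matrix in compressed sparse row (CSR) form: non-zero values, their
# column indices, and per-row extents.  All numbers are made up.
values  = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])          # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        # Inner trip count depends on the sparsity pattern, not on a compile-time shape.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(values, col_idx, row_ptr, x))    # same result as the dense A @ x
```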
163. Scalable performance analysis of massively parallel stochastic systems. Hayden, Richard Alexander. January 2011.
The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state-space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models since these are very commonly the language in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations as well as individual or tagged members of a large population. A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which exposes explicitly the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle applicable problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
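A minimal sketch of the ODE idea (generic fluid-approximation code, not the grouped-PEPA tooling developed in the thesis): waiting clients and idle servers synchronise at the minimum of their apparent rates, and the mean populations evolve under four ODEs however many clients and servers the model contains. All rates and population sizes are invented.

```python
import numpy as np
from scipy.integrate import odeint

r_think, r_req, r_serve, r_reset = 1.0, 2.0, 2.0, 0.5   # invented rates

def mean_field(y, t):
    c_think, c_wait, s_idle, s_busy = y
    # Shared 'request' action: bounded by whichever side is the bottleneck.
    req = min(r_req * c_wait, r_serve * s_idle)
    return [
        -r_think * c_think + req,      # clients thinking
         r_think * c_think - req,      # clients waiting for service
        -req + r_reset * s_busy,       # servers idle
         req - r_reset * s_busy,       # servers busy
    ]

y0 = [1000.0, 0.0, 100.0, 0.0]         # 1000 clients, 100 servers
t = np.linspace(0.0, 20.0, 201)
means = odeint(mean_field, y0, t)      # 4 ODEs, independent of the population sizes
print(means[-1])                       # approximate steady-state mean populations
```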
164. Real time hand pose estimation for human computer interaction. Krejov, Philip G. January 2016.
The aim of this thesis is to address the challenge of real-time pose estimation of the hand. Specifically, this thesis aims to determine the joint positions of a non-augmented hand. This thesis focuses on the use of depth, performing localisation of the parts of the hand for efficient fitting of a kinematic model, and consists of four main contributions. The first contribution presents an approach to Multi-touch(less) tracking, where the objective is to track the fingertips with a high degree of accuracy without sensor contact. Using a graph-based approach, the surface of the hand is modelled and extrema of the hand are located. From this, gestures are identified and used for interaction. We briefly discuss one use case for this technology in the context of the Making Sense demonstrator, inspired by the film "The Minority Report". This demonstration system allows an operator to quickly summarise and explore complex multi-modal multimedia data. The tracking approach allows for collaborative interactions due to its highly efficient tracking, resolving 4 hands simultaneously in real-time. The second contribution applies a Randomised Decision Forest (RDF) to the problem of pose estimation and presents a technique to identify regions of the hand, using features that sample depth. The RDF is an ensemble-based classifier that generalises to unseen data and can model expansive datasets, learning from over 70,000 pose examples. The approach is also demonstrated in the challenging application of American Sign Language (ASL) fingerspelling recognition. The third contribution combines a machine learning approach with a model-based method to overcome the limitations of either technique in isolation. An RDF provides initial segmentation, allowing surface constraints to be derived for a 3D model, which is subsequently fitted to the segmentation. This stage of global optimisation incorporates temporal information and enforces kinematic constraints. Using Rigid Body Dynamics for optimisation, invalid poses due to self-intersection and segmentation noise are resolved. The accuracy of the approach is limited by the natural variance between users and the use of a generic hand model. The final contribution therefore proposes an approach to refine the pose via cascaded linear regression, which samples the residual error between the depth and the model. This combination of techniques is demonstrated to provide state-of-the-art accuracy in real time, without the use of a GPU and without the requirement for model initialisation.
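For context, randomised decision forests on depth images are typically trained on depth-comparison features of the kind popularised by Shotton et al.; the sketch below shows that generic feature (it is not claimed to be the exact feature used in the thesis, and the offsets are illustrative).

```python
import numpy as np

def depth_feature(depth, x, u, v, background=10000.0):
    """Generic depth-comparison feature f = d(x + u/d(x)) - d(x + v/d(x)).

    Scaling the pixel offsets u, v by the depth at x makes the response roughly
    invariant to how far the hand is from the camera.  `depth` is a 2D array of
    depth values; `x`, `u`, `v` are (row, col) pairs; probes falling outside the
    image read as background."""
    def probe(p):
        r, c = int(round(p[0])), int(round(p[1]))
        if 0 <= r < depth.shape[0] and 0 <= c < depth.shape[1]:
            return depth[r, c]
        return background

    d_x = depth[x]
    return probe((x[0] + u[0] / d_x, x[1] + u[1] / d_x)) - \
           probe((x[0] + v[0] / d_x, x[1] + v[1] / d_x))

# Each split node of the forest thresholds one such feature; training picks the
# (u, v, threshold) triple that maximises information gain over hand-part labels.
```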
165. interactive Islamic Prayer (iIP). Farsi, Mohammed Abdulwahab R. January 2016.
The implementation of Virtual Environments has often been used within the educational domain. This study adopts a Virtual Environment (VE) setting to enhance and develop the physical aspects of teaching the Islamic prayer to primary school children, in comparison to traditional forms of teaching through a prayer book and prayer video. An interactive teaching application, the interactive Islamic Prayer (iIP), was designed and developed for this purpose and uses Microsoft's Kinect for Windows sensor to demonstrate the various movements of the prayer in sequence. Through the administration of a number of questionnaires, a quantitative analysis of the participants' learning experience was carried out, along with details of which approach the participants preferred. The questionnaires also provided a detailed insight into six areas of study from the learners' perspective when using the various learning approaches: comprehension, learning experience, interaction, satisfaction, usability and achievement. The results revealed a higher degree of interaction within the lesson on prayer when using the iIP compared to the traditional teaching methods, and although some participants were unfamiliar with using the Kinect, on the whole they found it to be fun and educational. The findings also showed that the software was able to focus on lower-level thinking skills, such as recalling information and memory: a test of the students' knowledge of the prayer before and after using the software showed a significant improvement in comparison to the other approaches. Recommendations are given on how to effectively implement this software within the relevant classrooms.
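To illustrate how a Kinect-based lesson can check a learner's posture against a demonstrated movement, here is a hedged sketch (not the iIP implementation; the joints, coordinates and tolerance are invented) that compares tracked skeleton joints with a stored reference pose after removing the overall body position.

```python
import numpy as np

# Hypothetical reference joint positions (metres, sensor space) for one prayer posture.
REFERENCE_POSE = {
    "head":       np.array([0.00,  0.80, 2.00]),
    "spine":      np.array([0.00,  0.40, 2.00]),
    "left_knee":  np.array([-0.20, -0.30, 2.00]),
    "right_knee": np.array([0.20, -0.30, 2.00]),
}
TOLERANCE_M = 0.15   # how far each joint may deviate and still count as a match

def posture_matches(tracked, reference=REFERENCE_POSE, tol=TOLERANCE_M):
    """True when every tracked joint lies within `tol` metres of the reference,
    after removing the overall translation so the learner can stand anywhere."""
    offset = tracked["spine"] - reference["spine"]
    return all(np.linalg.norm(tracked[name] - (pos + offset)) < tol
               for name, pos in reference.items())
```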
166. Measurement-based quantum information processing with imperfect operation. Tame, Mark Simon. January 2008.
Recently, there has been considerable interest from the quantum information community in a new approach to quantum information processing (QIP) known as the measurement-based (MB) model. The model is based on the use of measurements to manipulate entanglement (quantum correlations) shared between the elements of multipartite quantum systems in order to carry out processing tasks, such as quantum computation (QC). This thesis addresses the MB model and its practical operation when imperfections are present. The imperfections considered are in the form of intrinsic systematic noise, natural limitations in the structure of the quantum resources and environment-induced decoherence in a variety of experimental setups.
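For readers unfamiliar with the model, the following sketch (a textbook two-qubit example, not the specific resources or noise models analysed in the thesis) shows measurement-based processing in miniature: preparing a two-qubit cluster state and measuring the first qubit in a rotated basis applies the unitary H Rz(theta) to the second qubit, up to a Pauli correction determined by the outcome.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Rz = lambda theta: np.diag([1, np.exp(1j * theta)])

theta = 0.7
psi = np.array([0.6, 0.8], dtype=complex)            # input state a|0> + b|1>

# Two-qubit cluster: |psi> on qubit 1, |+> on qubit 2, then a controlled-Z.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
state = CZ @ np.kron(psi, plus)

# Measure qubit 1 in the basis (|0> +/- e^{-i theta}|1>)/sqrt(2).
for s, sign in enumerate((+1, -1)):
    m = np.array([1, sign * np.exp(-1j * theta)], dtype=complex) / np.sqrt(2)
    out = np.kron(m.conj(), np.eye(2)) @ state        # bra of outcome s tensored with identity
    out /= np.linalg.norm(out)
    expected = np.linalg.matrix_power(X, s) @ H @ Rz(theta) @ psi
    print(f"outcome s={s}: overlap with X^s H Rz(theta)|psi> =",
          f"{abs(np.vdot(expected, out)):.6f}")       # 1.000000 for both outcomes
```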
167. 2D and 3D computer vision analysis of gaze, gender and age. Zhang, W. January 2016.
Human-Computer Interaction (HCI) has been an active research area for over four decades. Research studies and commercial designs in this area have been largely facilitated by the visual modality, which brings diversified functionality and improved usability to HCI interfaces by employing various computer vision techniques. This thesis explores a number of facial cues, such as gender, age and gaze, by performing 2D and 3D based computer vision analysis. The ultimate aim is to create a natural HCI strategy that can fulfil user expectations, augment user satisfaction and enrich user experience by understanding user characteristics and behaviours. To this end, salient features have been extracted and analysed from 2D and 3D face representations; 3D reconstruction algorithms and their compatible real-world imaging systems have been investigated; and case study HCI systems have been designed to demonstrate the reliability, robustness and applicability of the proposed methods. More specifically, an unsupervised approach has been proposed to localise eye centres in images and videos accurately and efficiently. This is achieved by utilisation of two types of geometric features and eye models, complemented by an iris radius constraint and a selective oriented gradient filter specifically tailored to this modular scheme. This approach resolves challenges such as interfering facial edges, undesirable illumination conditions, head poses, and the presence of facial accessories and makeup. Tested on three publicly available databases (the BioID database, the GI4E database and the extended Yale Face Database B) and a self-collected database, this method outperforms all the methods compared against and thus proves to be highly accurate and robust. Based on this approach, a gaze gesture recognition algorithm has been designed to increase the interactivity of HCI systems by encoding eye saccades into a communication channel similar to the role of hand gestures. As well as analysing eye/gaze data that represent user behaviours and reveal user intentions, this thesis also investigates the automatic recognition of user demographics such as gender and age. The Fisher Vector encoding algorithm is employed to construct visual vocabularies as salient features for gender and age classification. Algorithm evaluations on three publicly available databases (the FERET database, the LFW database and the FRCVv2 database) demonstrate the superior performance of the proposed method in both laboratory and unconstrained environments. In order to achieve enhanced robustness, a two-source photometric stereo method has been introduced to recover surface normals, such that more invariant 3D facial features become available that can further boost classification accuracy and robustness. A 2D+3D imaging system has been designed for construction of a self-collected dataset including 2D and 3D facial data. Experiments show that utilisation of 3D facial features can increase the gender classification rate by up to 6% (based on the self-collected dataset), and can increase the age classification rate by up to 12% (based on the Photoface database). Finally, two case study HCI systems, a gaze gesture based map browser and a directed advertising billboard, have been designed by adopting all the proposed algorithms as well as the fully compatible imaging system.
Benefits from the proposed algorithms naturally ensure that the case study systems possess high robustness to head pose and illumination variation and achieve excellent real-time performance. Overall, the proposed HCI strategy, enabled by reliably recognised facial cues, can serve to spawn a wide array of innovative systems and to bring HCI to a more natural and intelligent state.
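As a hedged sketch of the encoding step (standard Fisher Vector code over a GMM vocabulary, not the thesis's exact pipeline; descriptor sizes and counts are invented), the gradient with respect to the Gaussian means is computed per visual word, power- and L2-normalised, and then fed to a classifier such as an SVM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Fisher Vector (mean-gradient part) of local descriptors under a
    diagonal-covariance GMM visual vocabulary."""
    n = len(descriptors)
    gamma = gmm.predict_proba(descriptors)                     # soft assignments, N x K
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        parts.append((gamma[:, k, None] * diff).sum(axis=0) / (n * np.sqrt(gmm.weights_[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                     # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                   # L2 normalisation

rng = np.random.default_rng(0)
vocab_train = rng.normal(size=(5000, 64))                      # invented local descriptors
gmm = GaussianMixture(n_components=16, covariance_type="diag",
                      random_state=0).fit(vocab_train)
encoding = fisher_vector(rng.normal(size=(500, 64)), gmm)      # one face image's descriptors
print(encoding.shape)                                          # (16 * 64,) -> input to an SVM
```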
168. Strategic logics: complexity, completeness and expressivity. Walther, Dirk. January 2007.
by transferring normative attributes from one agent to another. Such interactions are called delegation. Formal models of delegation and control were studied in, e.g., [189, 149, 191]. In this work, we consider the scenario where agents delegate control over propositions to other agents. The distinction between controllable and uncontrollable propositions stems from areas like discrete event systems and control theory, where, e.g., Boutilier [39] studied control in the context of deontic logic. Control and controllable propositions were also studied in [52, 66, 249, 248]. We now give an overview of the thesis. The main purpose of Chapter 2 is to introduce basic concepts and notation and to review relevant literature. The first section presents a brief survey of modal logic. Then, in Sections 2.2, 2.3 and 2.4, we introduce epistemic, temporal and strategic modal logics and state known results that characterise their expressivity and computational complexity. In particular, we consider variants of ATL as extensions of branching-time logics. With such ATL-like logics we can describe dynamic multi-agent interactions. In Section 2.5, we discuss extensions of ATL with epistemic notions. Additionally, we suggest a framework for memory-bounded strategic reasoning. In particular, we introduce an epistemic variant of ATL that accounts for agents with limited memory resources, as this case had been neglected in the literature to date. In Chapter 3, we investigate the computational complexity of ATL and its epistemic extension ATEL. We show in detail how the complexity of the satisfiability problem for both logics can be settled at ExpTime-complete. The part of the chapter about ATL is based on the paper 'ATL Satisfiability is Indeed ExpTime-Complete' by Walther, Lutz, Wolter and Wooldridge in the Journal of Logic and Computation, 2006 [265], and the part about ATEL is based on the paper 'ATEL with Common and Distributed Knowledge is ExpTime-Complete' by Walther, which was presented at the 4th Workshop on Methods for Modalities, Humboldt University, Berlin, December 1-2, 2005 [264]. In Chapter 4, we aim to extend the expressiveness of ATL without increasing its computational complexity. We introduce explicit names for strategies in the object language and extend modal operators with the possibility to bind agents to strategy names. In this way, we can fix the decisions of agents that possibly belong to several coalitions. By identifying the behaviour of agents, we can reason about the effects of agents changing coalitions. Dynamic coalitions provide more flexibility to adapt abilities to a changing environment. We investigate the expressivity of the resulting logic ATLES and compare it to ATL and ATL*. Moreover, we formulate two model checking problems for ATLES and investigate their complexity as well as the complexity of the satisfiability problem for ATLES. Additionally, we present a complete axiomatisation. This chapter is based on the paper 'Alternating-time Temporal Logic with Explicit Strategies' by Walther, van der Hoek and Wooldridge, which is to be presented at the 11th Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Brussels, Belgium, June 25-27, 2007 [266].
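For orientation, the coalition modalities that ATL adds to branching-time logic are summarised below (a generic textbook rendering, not notation specific to this thesis).

```latex
% ATL grammar: <<A>> psi reads "coalition A has a joint strategy to enforce psi".
\[
  \varphi ::= p \mid \neg\varphi \mid \varphi \wedge \varphi
        \mid \langle\!\langle A \rangle\!\rangle \mathsf{X}\,\varphi
        \mid \langle\!\langle A \rangle\!\rangle \mathsf{G}\,\varphi
        \mid \langle\!\langle A \rangle\!\rangle\, \varphi \,\mathsf{U}\, \varphi
\]
% Example: \langle\!\langle \{1,2\} \rangle\!\rangle \mathsf{G}\,\neg \mathit{fail}
% -- agents 1 and 2 can cooperate to keep the system out of failure states forever.
```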
169. Monitoring individual animals through a collaborative crowdsourcing and citizen science platform. Mason, Aaron D. January 2016.
Improvements in communication technology mean that increasing numbers of people around the world can share information with increasing ease. This information is forming knowledge in ways that were not previously possible, and it is enabling new communities to be formed. This research aimed to determine how this data could be exploited and combined with additional complementary tools to enable automated, large-scale, non-intrusive monitoring of wildlife, and in particular of keystone species. Three proof-of-concept research studies explored automated camera traps, citizen science and large-scale crowdsourcing to determine the potential of a system that combines this technology and its use for automated monitoring of wild animals. The results demonstrated that internet-connected camera traps are capable of collecting valuable visual data at a large scale. However, for keystone species such as tigers, the scale required for monitoring presents technical and economic challenges. The participation of citizen scientists in collecting and analysing data demonstrated a potential monitoring mechanism. However, the volume of data provided for such a focused practice proved insufficient for accurate large-scale monitoring. The Wildsense project, which used publicly available image data from the Web as its primary data source, demonstrated that there is additional data available that can be processed with the participation of citizen scientists. The popularity of and overall interest towards this project showed that crowdsourcing is a viable method for collecting relevant data for animal monitoring. It was concluded that the proof-of-concept experiments provided evidence that there is potential to monitor individual animals through an automated approach, and a system architecture is proposed. There is potential for automated large-scale monitoring using the proposed framework. However, there are significant challenges to overcome, and multiple directions for future work are recommended for exploration.
170. A Theory of Database Schemata: Studies in Conceptual and Relational Schemata. Owlett, J. January 1979.
No description available.