1. Continuous-valued probabilistic neural computation in VLSI. Chen, Xin (2004)
As interest in implantable devices and bio-electrical systems grows, intelligent embedded systems become important for extracting useful information from continuous-valued, noisy and drifting biomedical signals at sensory interfaces. Probabilistic generative models utilise stochasticity to represent the natural variability of real data, and therefore suggest a potential approach to this application. However, few probabilistic models are amenable to VLSI implementation. This thesis explores the feasibility of realising continuous-valued probabilistic behaviour in VLSI, which may subsequently underpin an intelligent embedded system. In this research, a probabilistic generative model that can model continuous data, with a simple and hardware-amenable training algorithm, has been developed. Based on stochastic computing units with Gaussian noise inputs, this model can adapt its "internal noise" to represent the variability (external noise) of real data. The training algorithm requires only one step of Gibbs sampling and is thus computationally inexpensive in both software and hardware. The capabilities of the model are demonstrated and explored with both artificial and real data. By translating this probabilistic generative model into hardware, a VLSI system with continuous-valued probabilistic behaviour and on-chip adaptability is then implemented. This not only demonstrates the feasibility of realising continuous-valued probabilistic behaviour in VLSI, but also provides a platform for studying the utility and on-chip adaptability of such behaviour. The system's abilities both to model and to regenerate continuous data distributions are explored. As the probabilistic behaviour is introduced by artificially generated noise, this VLSI system demonstrates computation with noise-induced, continuous-valued probabilistic behaviour, and points towards a potential candidate for an intelligent embedded system.
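As an illustration of the training idea described above, here is a minimal sketch of stochastic units with Gaussian noise inputs trained with a single Gibbs-sampling step (a contrastive-divergence-style update). The tanh activation, the network sizes and all names are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_units(inputs, W, noise_scale):
    """Continuous stochastic units: a squashing function of the weighted
    input plus zero-mean Gaussian 'internal noise' (illustrative form)."""
    activation = inputs @ W + noise_scale * rng.standard_normal(W.shape[1])
    return np.tanh(activation)

def cd1_update(v0, W, noise_scale, lr=0.01):
    """One step of Gibbs sampling (CD-1 style): sample hidden units from
    the data, reconstruct, resample, and nudge weights toward the data."""
    h0 = sample_units(v0, W, noise_scale)            # hidden sample given data
    v1 = sample_units(h0, W.T, noise_scale)          # one-step reconstruction
    h1 = sample_units(v1, W, noise_scale)            # hidden sample given reconstruction
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))  # contrastive update
    return W

# Toy usage: adapt a 2-visible / 4-hidden model to noisy 2-D data.
W = 0.1 * rng.standard_normal((2, 4))
for _ in range(1000):
    v = np.array([0.5, -0.5]) + 0.1 * rng.standard_normal(2)  # noisy sample
    W = cd1_update(v, W, noise_scale=0.2)
```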
2. Usability analysis of multiple embodied conversational agents in eBanking services. Collin, Stuart David (2005)
This thesis investigates the usability issues surrounding multiple embodied conversational agents and assesses the effectiveness of systems employing multiple agents, especially as, to date, little work has been carried out in this field. A history of embodied conversational agents is presented, along with the design and interface development used throughout the thesis. Four incremental, empirical studies are presented that investigate the usability of multiple agents in varying ways, in 3D virtual banking environments. Firstly, multiple agents are compared to a single agent in a scenario where information and dialogue are similar in both versions. Whilst limitations of the multiple-agent design are pointed out, promising results are reported which warrant further research. Two further empirical evaluations are reported in an attempt to optimise the usability of multiple embodied conversational agents in an eBanking environment. A fourth and final evaluation revisits the comparison of a single-agent version with an optimised multiple-agent version, demonstrating that the optimisations derived from the previous experiments make the multiple-agent version as usable as the single-agent version. The conclusions from this body of research support the use, and further research, of multiple embodied conversational agents in future eBanking services.
3. Biometric face recognition using multilinear projection and artificial intelligence. Al-Shiha, Abeer A. Mohamad (2013)
Numerous problems of automatic facial recognition in linear and multilinear subspace learning have been addressed; nevertheless, many difficulties remain. This work focuses on two key problems for automatic facial recognition and feature extraction: object representation and high dimensionality. To address these problems, a bidirectional two-dimensional neighborhood preserving projection (B2DNPP) approach for human facial recognition has been developed. Compared with 2DNPP, the proposed method operates on 2-D facial images and performs reductions along both the row and column directions of the images. Furthermore, it has the ability to reveal variations between these directions. To further improve the performance of the B2DNPP method, a new B2DNPP based on the curvelet decomposition of human facial images is introduced. The curvelet multi-resolution tool enhances the representation of edges and other singularities along curves, and thus improves directional features. In this method, an extreme learning machine (ELM) classifier is used, which significantly improves the classification rate. The proposed C-B2DNPP method decreases the error rate from 5.9% to 3.5%, from 3.7% to 2.0%, and from 19.7% to 14.2% on the ORL, AR, and FERET databases respectively, compared with 2DNPP. This corresponds to relative decreases in error rate of more than 40%, 45%, and 27% respectively on the ORL, AR, and FERET databases. Facial images have particular natural structures in the form of two-, three-, or even higher-order tensors. Therefore, a novel method of supervised and unsupervised multilinear neighborhood preserving projection (MNPP) is proposed for face recognition. This allows the natural representation of multidimensional images as 2-D, 3-D, or higher-order tensors, and extracts useful information directly from tensorial data rather than from matrices or vectors. As opposed to B2DNPP, which derives only two subspaces, the MNPP method obtains multiple interrelated subspaces over different tensor directions, and the subspaces are learned iteratively by unfolding the tensor along these directions. The performance of MNPP has been evaluated in terms of the two modes of facial recognition biometrics: identification and verification. The proposed supervised MNPP method achieved decreases in error rate of over 50.8%, 75.6%, and 44.6% on the ORL, AR, and FERET databases respectively, compared with 2DNPP. The results therefore demonstrate that the MNPP approach obtains the best overall performance in various learning scenarios.
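The relative reductions quoted follow from the absolute error rates; a quick check, assuming relative reduction is computed as (old - new) / old:

```python
# Relative error-rate reduction: (old - new) / old, from the quoted figures.
pairs = {"ORL": (5.9, 3.5), "AR": (3.7, 2.0), "FERET": (19.7, 14.2)}
for db, (old, new) in pairs.items():
    print(f"{db}: {100 * (old - new) / old:.1f}% relative reduction")
# ORL: 40.7%, AR: 45.9%, FERET: 27.9% -- matching the quoted >40%, >45%, >27%.
```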
4. Intuitive ontology authoring using controlled natural language. Denaux, Ronald (2013)
Ontologies have been proposed and studied in the last couple of decades as a way to capture and share people's knowledge about the world in a form that is processable by computer systems. Ontologies have the potential to serve as a bridge between the human conceptual understanding of the world and the data produced, processed and stored in computer systems. However, ontologies have so far failed to gather widespread adoption, falling short of the original vision of the semantic web as a next generation of the world wide web, in which everyone would be able to contribute and interlink their data and knowledge as easily as they can contribute and interlink their websites. One of the main reasons for this lack of widespread adoption is the steep learning curve for authoring ontologies: most people find it too difficult to learn the syntax and formal semantics of ontology languages. Most research has tried to alleviate this problem by finding ways to help people collaborate with knowledge engineers when building ontologies; this approach, however, requires the wide availability of knowledge engineers, who in practice are scarce. In the context of the semantic web, recent research has started looking at ways to capture knowledge directly from domain experts as ontologies. One such approach advocates the use of Controlled Natural Languages (CNLs) as a promising way to alleviate the syntactic impediment to writing ontological constructs. However, not much is yet known about the capabilities and limitations of CNL-based ontology authoring by domain experts. It is also unknown what type of automatic tool support can and should be provided to novice ontology authors, although such intelligent tool support is becoming possible due to advances in reasoning with existing ontologies and in related areas such as natural language processing. This PhD investigates how CNL-based ontology authoring systems can make ontology authoring more accessible to domain experts by providing intelligent tool support. In particular, this thesis iteratively investigates the impact of providing various types of intelligent tool support for authoring ontologies using the Web Ontology Language (OWL) and a controlled natural language called Rabbit. After each iteration of added tool support, we evaluate how it impacts the ontology authoring process and what the main limitations of the resulting ontology authoring system are. Based on the limitations found, we decide which further tool support would be most beneficial to novice ontology authors. This methodology resulted in iteratively providing support for (i) understanding the syntactic capabilities and limitations of the chosen controlled natural language; (ii) following appropriate ontology engineering methodologies; (iii) fostering awareness about the logical consequences of adding new knowledge to an ontology; and (iv) interacting with the ontology authoring system via dialogues.
The main contributions of this PhD are (i) showing that domain experts benefit from guidance about the ontology authoring process and from understandable syntax error messages when searching for the correct CNL syntax; (ii) the definition of a framework to integrate the syntactic and semantic analyses of ontology authors' inputs; (iii) showing that intuitive feedback about how their inputs are integrated into an existing ontology benefits ontology authors, as they become aware of potential ontology defects; and (iv) the definition of a framework to analyse and describe ontology authoring in terms of dialogue moves and their discourse structure.
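As a toy illustration of the CNL-to-ontology mapping at the heart of such systems, the sketch below translates one Rabbit-like sentence pattern into a Manchester-style OWL axiom. The sentence pattern and output syntax are simplified assumptions; Rabbit's real grammar and its OWL translation are far richer.

```python
import re

# One Rabbit-like sentence pattern mapped to an OWL axiom in Manchester-style
# syntax. The pattern below is an illustrative assumption, not Rabbit itself.
PATTERN = re.compile(r"^Every (\w+) is a kind of (\w+)\.$")

def to_axiom(sentence: str) -> str:
    match = PATTERN.match(sentence)
    if not match:
        # A real system would return an understandable syntax error here.
        raise ValueError(f"Unsupported CNL pattern: {sentence!r}")
    sub, sup = match.groups()
    return f"Class: {sub} SubClassOf: {sup}"

print(to_axiom("Every River is a kind of Waterbody."))
# -> Class: River SubClassOf: Waterbody
```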
5. Ordering based decision making. Chen, Shuwei (2014)
Decision making is a crucial step in many real applications such as financial planning, organization management, and product evaluation and recommendation. Qualitative information is widely used for expressing experts' evaluations of, or preferences among, alternatives under different criteria. In many cases, decision making amounts to ordering the alternatives and selecting the top one or few from the resulting ranking. Orderings provide a very natural and effective way of resolving indeterminate situations in real-life decision making problems. This thesis focuses on the representation of, and reasoning with, qualitative ordering information for decision making: ordering based decision making. Such a decision making paradigm reflects the qualitative nature of various decision making scenarios in which the available information can only be preferential ordering comparisons between decision alternatives, and numerical approximation is not available or not needed. This thesis proposes a lattice-ordered linguistic-valued logic based reasoning framework for multi-criteria decision making problems where the qualitative preferences from experts are in lattice order; a consensus group decision making model with partially ordered preferences associated with belief degrees; and an automated reasoning based hierarchical framework for video-based human activity recognition. This ordering based reasoning and decision making research aims to provide an alternative qualitative framework for handling uncertain ordering information in decision making problems, and intends to enhance the quantitative theory of decision science with qualitative, algebraic and logic-oriented approaches for representing, aggregating and reasoning with ordering information.
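For a flavour of ordering based aggregation (not the thesis's lattice-ordered linguistic-valued logic), the sketch below ranks alternatives purely from pairwise preference comparisons using a Copeland-style net-win score; alternatives and preferences are invented:

```python
from collections import Counter

# Experts supply only pairwise orderings (winner preferred over loser);
# a Copeland-style score ranks alternatives by net wins.
preferences = [("A", "B"), ("A", "C"), ("B", "C")]

score = Counter()
for winner, loser in preferences:
    score[winner] += 1
    score[loser] -= 1

ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # -> ['A', 'B', 'C'], top alternative first
```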
6. Reconfigurable hardware-based multi-agent systems for capital markets trading. Gerlein, Eduardo (2015)
The use of High Performance Computing (HPC) in capital markets has witnessed considerable growth in the past decade. In particular, electronic trading in globalized markets and exchanges requires sophisticated communication and data management to support the massive amount of incoming streaming data, where the main problem is latency management. In addition, novel trading algorithms may incorporate computational intelligence techniques in order to implement and improve the current decision making process. Multi-Agent Systems (MAS) have been recognized as a feasible solution to complex problems in many areas and appear an innovative, powerful and flexible approach for implementing trading engines. On the other hand, reconfigurable hardware, and in particular Field Programmable Gate Arrays (FPGAs), offers performance benefits over conventional software implementations, and seems to be the next logical step in the development of multi-agent technology. However, only a very limited number of projects have reported multi-agent implementations in reconfigurable hardware. Current agent oriented programming (AOP) methodologies are not entirely appropriate for the design and deployment of MAS at microchip level, making agents in hardware difficult to engineer. This arises because there is no clear methodology for their design that incorporates a similar level of conceptualization to software implementations while at the same time taking into account the specific requirements of FPGAs. This thesis presents as its main contribution a novel methodology for implementing MAS in FPGAs using the Event-Driven Reactive Architecture (EDRA) at agent level and a hierarchical Network-on-Chip (NoC) approach at societal level, presenting an agent-based trading engine as a validation scenario. EDRA is proposed to design and implement the internal architecture of hardware-based agents, allowing one to overcome the absence of a well-defined procedure for modelling and deploying a MAS in reconfigurable hardware. It uses a fine-grained task decomposition inside agents to generate reactive behaviours and links them with consistent hardware interfaces to enable the internal flow of information, favouring modular construction, flexibility and re-use of structures. The communication model at societal level consists of a Star-NoC topology that scales in a hierarchical fashion through the integration of lower-level clusters of agents and routers, in conjunction with a message broadcast mechanism through standardized interfaces using the Open Core Protocol (OCP). A router microarchitecture and network adapters are designed to interface EDRA agents to the NoC. Together, EDRA and the Star-NoC allow for the design of Multi-Agent Systems-on-Chip (MASoC), extending an agent oriented design model to the realm of hardware design. Furthermore, this thesis demonstrates how to use the proposed model to design and deploy an agent-based trading engine, implemented on an Altera Stratix IV FPGA. With this application, an agent-based High Performance Computing platform for financial applications is created; a Machine Learning (ML) technique is also included as a means to increase the agents' cognitive capabilities, achieving performance adequate for practical High Frequency Trading (HFT) applications.
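EDRA itself is a hardware architecture, but its fine-grained, event-driven task decomposition can be mirrored in software. The sketch below is a Python analogue only, with an invented event name and toy trading rule; it is not the FPGA design:

```python
# A software analogue of an event-driven reactive agent: incoming market
# events trigger small, independent behaviours, echoing EDRA's fine-grained
# task decomposition inside agents.
class ReactiveAgent:
    def __init__(self):
        self.handlers = {}          # event type -> list of behaviours
        self.position = 0

    def on(self, event_type, behaviour):
        self.handlers.setdefault(event_type, []).append(behaviour)

    def dispatch(self, event_type, payload):
        for behaviour in self.handlers.get(event_type, []):
            behaviour(self, payload)

def buy_on_dip(agent, price):
    if price < 100:                 # toy trading rule, purely illustrative
        agent.position += 1

agent = ReactiveAgent()
agent.on("tick", buy_on_dip)
for price in (101, 99, 98):
    agent.dispatch("tick", price)
print(agent.position)               # -> 2
```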
7. An operating environment for large scale virtual reality. Pettifer, Stephen Robert (1999)
Improvements in processors and graphics hardware are making Virtual Reality (VR) increasingly attractive as a means of human/computer interaction. Although there are compelling stand-alone demonstrations of specific aspects of the field, there is little in the way of generic software that supports and integrates the needs of large or complex virtual environments. Future VR systems will require 'large scale' issues to be addressed. These include complex graphics and object behaviour, large numbers of applications, and geographically distributed users. However, beyond the graphics and networking challenges, a core issue is the specification of the environments themselves. For sophisticated environments, more appropriate methods are required than the conventional programming tools provided by existing VR systems. For virtual environments, this specification approaches a description of the metaphysical properties of the space being described and the objects it contains. In such a specification, the relationship between the user and the environment is an important consideration. This thesis argues that by building a framework for describing virtual environments upon such a metaphysically orientated specification of the environments, one that takes explicit account of the relation between perceiver and perceived, many of the existing problems of VR can be successfully addressed. A VR systems architecture is developed that supports investigation of this perceptual divide between perceiver and perceived, and is demonstrably capable of facilitating these investigations in the context of 'large-scale' VR as defined above.
8. Effectiveness of explanation facilities for intelligent systems. Darlington, Keith (2014)
This report has been prepared as the cover paper for submission for the award of a "PhD by Publication", and investigates the effectiveness of explanation facilities for intelligent systems. The report is based on a series of my publications: there are thirteen chosen publications in this portfolio (see Appendices 2-14), two of which are chapters taken from two of my published books, and a brief summary and description of these publications is given in Appendix 1 (PhD by Publication: Registration Paper). The purpose of this report is to provide an account of the themes that give the publications described within this portfolio their coherence regarding the effectiveness of explanation facilities for intelligent systems. The papers described in this portfolio are not necessarily presented chronologically, because I wanted to present them in the way in which, I believe, their contents most naturally dovetail. My main contributions to knowledge are in three areas: general explanation design methods for symbolic expert systems, applications of explanation facilities in the healthcare domain, and the general applicability of explanation facilities in other intelligent system technologies. My main findings show that a strong case can be made for the inclusion of explanation facilities in expert systems, particularly justification-type explanations. Furthermore, it is recommended that designers of explanations for healthcare expert systems give careful consideration both to the stakeholders and to the nature of the clinical tasks undertaken; this recommendation could apply to other application domains. Finally, symbolic AI methods, such as heuristic rule-based expert systems or case-based reasoning techniques, are better suited to explanation than non-symbolic paradigms such as neural networks. Rule extraction techniques offer an effective way in which opaque technologies can deliver explanation facilities, by mapping output data to rules which are amenable to natural explanation. Applications of XML, such as RuleML, can be used to transform and disseminate these applications on the World Wide Web.
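As a minimal illustration of a justification-type explanation, the sketch below runs a forward-chaining rule base and replays the justifications of the rules that fired; the rules and domain facts are invented placeholders, not taken from the publications:

```python
# A justification-type explanation from a rule-based system: record which
# rules fired and replay their justifications as the explanation.
rules = [
    ("high_temperature", {"fever"}, "Fever suggests high temperature."),
    ("infection", {"high_temperature", "high_wbc"},
     "High temperature plus raised WBC suggests infection."),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:                  # forward-chain to a fixpoint
        changed = False
        for conclusion, premises, justification in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(justification)
                changed = True
    return facts, trace

facts, why = infer({"fever", "high_wbc"})
print("\n".join(why))               # the justification explanation
```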
9. Artificial intelligence for cognitive agents and intelligent environments. Suliman, Hussam (2007)
The objective of this research is to create intelligent and cognitive agents that resemble animal or human characters, and to provide an architecture in which to do so. This requires computational models and the formulation of an ontology for agents and their environments that is well structured for software and efficient computer simulation. This research concludes by presenting GOYA, an integrated cognitive agent architecture for the specification of cognitive agents simulated in virtual environments. Among the most important cognitive features modelled for GOYA are sensory and spatial perception, attention, and human memory. The cognitive systems modelled aim to be practical and plausible, and to contribute to producing believable and intelligent agents in virtual environments such as computer games.
10. Lifted heuristics: towards more scalable planning systems. Ridder, Bernardus (2014)
In this work we present a new lifted forward-chaining planning system that uses new heuristics and introduces novel pruning techniques, enabling it to solve problem instances that, until now, could not be solved by contemporary planners due to grounding. State-of-the-art planning systems rely on grounding, enumerating all possible actions before search can begin. Grounding is a necessary step for these planners because their domain analysis, heuristic computation, pruning strategies and even search strategies need this information as a prerequisite; it is an essential step for most (if not all) state-of-the-art planning systems. A few planning systems use lazy evaluation, which means that an action is only grounded when it is needed; but in these strategies, for most domains, the set of actions that need to be grounded comprises all the actions, so this does not solve the underlying problem. This thesis presents two new heuristics, called the lifted relaxed planning graph heuristic and the lifted causal graph heuristic, that do not require the planning domain to be grounded. This makes our planning system applicable to larger problem instances because it has smaller memory requirements than state-of-the-art forward-chaining planners. Heuristics that did not require grounding have been presented in the past (for example, in least-commitment planners such as partial-order planners), but their weakness prevents such planners from competing with the state of the art. The heuristics presented in this thesis compare favourably to the state of the art. We build on previous work on symmetry breaking in order to abstract the planning problem and prune the search space. Symmetry relationships explored in the past are quite restrictive and are only useful in problems that are highly symmetrical. We relax this definition and build upon almost-symmetry, which finds more symmetrical relationships and allows us to construct data structures such as the lifted relaxed planning graph and the lifted transition graph using less memory and time.
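For context, the sketch below computes the classic grounded delete-relaxation idea behind relaxed planning graph heuristics: ignore delete effects and count fact layers until the goal appears. The thesis's contribution is a lifted version that avoids enumerating ground actions upfront; this grounded toy, with an invented domain, is for illustration only:

```python
# Delete-relaxation heuristic (grounded, for illustration): build fact
# layers while ignoring delete effects; the layer depth at which the goal
# first appears is the heuristic estimate.
def relaxed_plan_graph_depth(init, goal, actions):
    """actions: list of (preconditions, add_effects) pairs of sets."""
    facts, depth = set(init), 0
    while not goal <= facts:
        new_facts = set()
        for pre, add in actions:
            if pre <= facts:        # applicable under the relaxation
                new_facts |= add
        if new_facts <= facts:      # fixpoint reached: goal unreachable
            return None
        facts |= new_facts
        depth += 1
    return depth

actions = [({"at_a"}, {"at_b"}),
           ({"at_b"}, {"have_key"}),
           ({"have_key"}, {"door_open"})]
print(relaxed_plan_graph_depth({"at_a"}, {"door_open"}, actions))  # -> 3
```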