About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Computational Intelligent Systems: Evolving Dynamic Bayesian Networks

Osunmakinde, Isaac 01 December 2009 (has links)
Dynamic Bayesian Networks (DBNs) are temporal probabilistic models for reasoning over time. They often form the core reasoning component of intelligent systems in the field of machine learning. Recent studies have focused on the development of particular DBNs such as Hidden Markov Models (HMMs) and their variants, which are explicitly specified by highly skilled users and have gained popularity in speech recognition. These varieties of HMMs represented as DBNs have contributed to the baseline of temporal modelling. However, they are limited in their expressive power, rely on approximations, and leave users with the difficult choice of an appropriate model for diverse real-life applications. To worsen the situation, researchers and practitioners have stressed that evolving (or learning) such network models from environments captured as massive datasets is often impractical, because the underlying problem is computationally intensive (NP-hard). Finding solutions to these challenges is a difficult task.

In this thesis, a new class of temporal probabilistic modelling, called evolving dynamic Bayesian networks (EDBN), is proposed and demonstrated to make the technology accessible to both experts and non-experts, such as industrial practitioners, decision-makers and researchers. Dynamic Bayesian Networks are ideally suited to achieving situation awareness, in which elements in the environment must be perceived within a volume of time and space, their meaning understood, and their status predicted in the near future. The use of Dynamic Bayesian Networks in achieving situation awareness has been poorly explored in current research efforts. This research evolves DBNs automatically from any environment captured as multivariate time series (MTS), which minimizes approximation and mitigates the challenge of model choice. This potentially accommodates both highly skilled users and non-expert practitioners, and opens diverse real-world application areas for DBNs. The architecture of our EDBN uses a combined strategy, resolving two orthogonal issues: (1) evolving DBNs in the absence of domain experts and (2) mitigating computational intensity (NP-hardness) with economic scalability.

Most notably, the major contributions of this thesis are as follows: the development of a new class of temporal probabilistic modelling (EDBN), whose architecture facilitates the demonstration of its emergent situation awareness (ESA) and emergent future situation awareness (EFSA) technologies, which reveal hidden patterns over current and future time steps respectively; the development and integration of an economically scalable framework called dynamic memory management in adaptive learning (DMMAL) into the EDBN architecture to evolve such network models from environments captured as massive datasets; the design of configurable agent actuators, adaptive operators and representative partitioning algorithms that support the scalability framework; the formal development and optimization of a genetic algorithm (GA) to evolve optimal Bayesian networks from datasets, with emphasis on backtracking avoidance; and diverse applications of EDBN technologies, including business intelligence, revealing trends in insulin doses given to medical patients, water quality management, project profitability analysis and sensor networks.

To ensure the universality and reproducibility of our architecture, we methodically conducted experiments using varied real-life datasets and publicly available machine learning datasets, mostly from the University of California Irvine (UCI) repository.
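To make the structure-learning idea concrete, the following is a minimal, hypothetical Go sketch of a genetic algorithm searching over Bayesian network structures (directed acyclic graphs encoded as adjacency matrices). The scoring function, operators and population handling are illustrative placeholders, not the thesis's DMMAL/EDBN implementation; rejecting cyclic offspring loosely mirrors the backtracking-avoidance idea.

```go
// Hypothetical sketch: a genetic algorithm over Bayesian network
// structures. Everything here is illustrative, not the thesis's code.
package main

import (
	"fmt"
	"math/rand"
)

const nVars = 5 // number of variables in the multivariate time series

type structure [nVars][nVars]bool // structure[i][j] == true means edge i -> j

// score stands in for a data-driven scoring metric, e.g. a penalised
// log-likelihood computed from the dataset.
func score(s structure) float64 {
	edges := 0
	for i := range s {
		for j := range s[i] {
			if s[i][j] {
				edges++
			}
		}
	}
	// Toy objective: prefer moderately connected networks.
	return -float64((edges - nVars) * (edges - nVars))
}

// acyclic rejects candidates containing directed cycles, discarding
// invalid offspring instead of repairing them.
func acyclic(s structure) bool {
	var visit func(v int, seen [nVars]bool) bool
	visit = func(v int, seen [nVars]bool) bool {
		if seen[v] {
			return false // revisited a vertex on the current path: cycle
		}
		seen[v] = true
		for j := 0; j < nVars; j++ {
			if s[v][j] && !visit(j, seen) {
				return false
			}
		}
		return true
	}
	for v := 0; v < nVars; v++ {
		if !visit(v, [nVars]bool{}) {
			return false
		}
	}
	return true
}

// mutate flips one randomly chosen off-diagonal edge.
func mutate(s structure) structure {
	i, j := rand.Intn(nVars), rand.Intn(nVars)
	if i != j {
		s[i][j] = !s[i][j]
	}
	return s
}

func main() {
	pop := make([]structure, 20)
	for gen := 0; gen < 100; gen++ {
		best := pop[0]
		for _, s := range pop {
			if score(s) > score(best) {
				best = s
			}
		}
		// Next generation: mutated copies of the best valid structure.
		for k := range pop {
			if c := mutate(best); acyclic(c) {
				pop[k] = c
			}
		}
		if gen == 99 {
			fmt.Println("best score:", score(best))
		}
	}
}
```

In a full system the score would be estimated from the data and the operators would include crossover and adaptive mutation, but the loop structure would be similar.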
2

Field D* Pathfinding in Weighted Simplicial Complexes

Perkins, Simon James 01 September 2014 (has links)
The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra's shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing each region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra's algorithm or A* is then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes, specifically triangulations in 2D and tetrahedral meshes in 3D. Such representations offer benefits in terms of space over a weighted grid, since fewer triangles can represent polygonal objects with greater accuracy than a large number of grid cells. By exploiting these savings, we show that Triangulated Field D* can produce an equivalent path cost to grid-based Multi-resolution Field D*, using up to an order of magnitude fewer triangles than grid cells and visiting an order of magnitude fewer nodes. Finally, as a practical demonstration of the utility of our formulation, we show how Field D* can be used to approximate a distance field on the nodes of a simplicial complex, and how this distance field can be used to weight the simplicial complex to produce contour-following behaviour by shortest paths computed with Field D*.
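For illustration, here is a small Go sketch (not taken from the thesis) of the interpolated cost computation at the heart of Field D*-style planners: the cost of reaching a vertex via an edge is minimised over an interpolation parameter along that edge. Practical implementations use a closed-form minimisation; brute-force sampling is used here only to keep the sketch short.

```go
// Hypothetical sketch of Field D*-style edge interpolation in 2D.
package main

import (
	"fmt"
	"math"
)

type vec2 struct{ x, y float64 }

func dist(p, q vec2) float64 {
	return math.Hypot(p.x-q.x, p.y-q.y)
}

// interpolatedCost returns the minimum over t in [0,1] of
//   (1-t)*gA + t*gB + w * |s - ((1-t)*a + t*b)|
// i.e. enter the shared edge (a, b) at the interpolated point, pay the
// interpolated path cost there, then cross the face of weight w to s.
func interpolatedCost(s, a, b vec2, gA, gB, w float64) float64 {
	best := math.Inf(1)
	const steps = 1000
	for i := 0; i <= steps; i++ {
		t := float64(i) / steps
		p := vec2{(1-t)*a.x + t*b.x, (1-t)*a.y + t*b.y}
		if c := (1-t)*gA + t*gB + w*dist(s, p); c < best {
			best = c
		}
	}
	return best
}

func main() {
	a, b := vec2{0, 0}, vec2{1, 0} // shared edge of a triangle
	s := vec2{0.5, 0.8}            // vertex whose cost is being updated
	fmt.Printf("cost via edge: %.4f\n", interpolatedCost(s, a, b, 2.0, 1.5, 1.2))
}
```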
3

Accelerated cooperative co-evolution on multi-core architectures

Moyo, Edmore 01 February 2019 (has links)
The Cooperative Co-Evolution model has been used in Evolutionary Computation to optimize the training of artificial neural networks (ANNs). This architecture has proven to be a useful extension to domains such as Neuro-Evolution (NE), the training of ANNs using concepts of natural evolution. However, the demand for real-time systems and the ability to solve more complex tasks has prompted a need to further optimize these Cooperative Co-Evolution methods. Cooperative Co-Evolution methods consist of a number of phases, of which the evaluation phase is the most compute-intensive, taking as long as weeks to complete for some complex tasks. This study uses NE as a test case: we design a parallel Cooperative Co-Evolution processing framework and implement the optimized serial and parallel versions in the Golang (Go) programming language. Go is a multi-core programming language whose first-class concurrency constructs, channels and goroutines, make it well suited to parallel programming. Our study focuses on Enforced Subpopulations (ESP) for single-agent systems and Multi-Agent ESP for multi-agent systems. We evaluate the parallel versions on the benchmark tasks of double pole balancing and prey capture, for single- and multi-agent systems respectively, at increasing levels of task complexity. We observe a maximum speed-up of 20x for the parallel Multi-Agent ESP implementation over our single-core optimized version in the prey-capture task, and a maximum speed-up of 16x for ESP in the harder version of the double pole balancing task. We also observe linear speed-ups over a certain range of cores for the difficult versions of the tasks, indicating that the Go implementations are efficient and that the parallel speed-ups are better for more complex tasks. We find that, in complex tasks, Cooperative Co-Evolution Neuro-Evolution (CCNE) methods are amenable to multi-core acceleration, which provides a basis for the study of even more complex Cooperative Co-Evolution methods in a wider range of domains.
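As an illustration of the parallel evaluation pattern described above (though not the thesis's actual code), the Go sketch below fans candidate evaluations out to a pool of goroutines over a channel, one worker per core; the evaluate function is a placeholder for a full benchmark rollout such as pole balancing or prey capture.

```go
// Minimal sketch of channel-based parallel fitness evaluation in Go.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// evaluate stands in for running one candidate controller on the task.
func evaluate(candidate int) float64 {
	f := 0.0
	for i := 0; i < 1_000_000; i++ { // placeholder for an expensive rollout
		f += float64((candidate*i)%7) * 1e-7
	}
	return f
}

func main() {
	const popSize = 64
	jobs := make(chan int)
	fitness := make([]float64, popSize)

	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ { // one worker goroutine per core
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range jobs {
				fitness[c] = evaluate(c) // each index is written by exactly one worker
			}
		}()
	}
	for c := 0; c < popSize; c++ {
		jobs <- c
	}
	close(jobs)
	wg.Wait()

	fmt.Println("first fitness values:", fitness[:4])
}
```

Because evaluation dominates the runtime and each candidate is independent, this embarrassingly parallel fan-out is where the reported multi-core speed-ups come from.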
4

Anomaly Detection and Prediction of Human Actions in a Video Surveillance Environment

Spasic, Nemanja 01 December 2007 (has links)
Worldwide focus has over the years been shifting towards security issues, not least due to recent worldwide terrorist activities. Several researchers have proposed state-of-the-art surveillance systems to help with some of these security issues, with varying success. Recent studies have suggested that the ability of these surveillance systems to learn common environmental behaviour patterns, as well as to detect and predict unusual, or anomalous, activities based on those learnt patterns, would be a possible improvement. In addition, some of these surveillance systems are still run by human operators, who are prone to mistakes and may need some help from the surveillance systems themselves in detecting anomalous activities. This dissertation attempts to address these suggestions by combining the fields of Image Understanding and Artificial Intelligence, specifically Bayesian Networks, to develop a prototype video surveillance system that can learn common environmental behaviour patterns and thereby detect and predict anomalous activity in the environment. In addition, this dissertation aims to show how the prototype system can adapt to these anomalous behaviours and integrate them into its common patterns over a prolonged period of occurrence. The prototype video surveillance system showed good performance and the ability to detect, predict and integrate anomalous activity in evaluation tests performed with a volunteer in an experimental indoor environment. In addition, the prototype system performed quite well on the PETS 2002 dataset 1, which it was not designed for. The evaluation procedure used some of the evaluation metrics commonly applied to the PETS datasets. Hence, the prototype system provides a good approach to anomaly detection and prediction using Bayesian Networks trained on common environmental activities.
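A hypothetical Go sketch of the kind of anomaly test such a system performs: an observation is scored under a Bayesian network learnt from normal behaviour and flagged when its probability falls below a threshold. The two-node network, its probability tables and the threshold are invented for illustration.

```go
// Hypothetical sketch: likelihood-based anomaly flagging with a toy
// two-node Bayesian network (Zone -> Action).
package main

import "fmt"

// Probability tables estimated from previously observed (normal) activity.
var pZone = map[string]float64{"entrance": 0.6, "storage": 0.4}

var pActionGivenZone = map[string]map[string]float64{
	"entrance": {"walk": 0.8, "loiter": 0.2},
	"storage":  {"walk": 0.3, "loiter": 0.7},
}

// likelihood is the joint probability of one observation under the network.
func likelihood(zone, action string) float64 {
	return pZone[zone] * pActionGivenZone[zone][action]
}

func main() {
	const threshold = 0.15 // would be tuned on normal footage in a real system
	observations := []struct{ zone, action string }{
		{"entrance", "walk"},
		{"entrance", "loiter"},
	}
	for _, o := range observations {
		p := likelihood(o.zone, o.action)
		fmt.Printf("%s/%s  p=%.2f  anomalous=%v\n", o.zone, o.action, p, p < threshold)
	}
}
```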
5

Improving searchability of automatically transcribed lectures through dynamic language modelling

Marquard, Stephen 01 December 2012 (has links)
Recording university lectures through lecture capture systems is increasingly common. However, a single continuous audio recording is often unhelpful for users, who may wish to navigate quickly to a particular part of a lecture, or locate a specific lecture within a set of recordings. A transcript of the recording can enable faster navigation and searching. Automatic speech recognition (ASR) technologies may be used to create automated transcripts, to avoid the significant time and cost involved in manual transcription. Low accuracy of ASR-generated transcripts may however limit their usefulness. In particular, ASR systems optimized for general speech recognition may not recognize the many technical or discipline-specific words occurring in university lectures. To improve the usefulness of ASR transcripts for the purposes of information retrieval (search) and navigating within recordings, the lexicon and language model used by the ASR engine may be dynamically adapted for the topic of each lecture. A prototype is presented which uses the English Wikipedia as a semantically dense, large language corpus to generate a custom lexicon and language model for each lecture from a small set of keywords. Two strategies for extracting a topic-specific subset of Wikipedia articles are investigated: a naïve crawler which follows all article links from a set of seed articles produced by a Wikipedia search from the initial keywords, and a refinement which follows only links to articles sufficiently similar to the parent article. Pair-wise article similarity is computed from a pre-computed vector space model of Wikipedia article term scores generated using latent semantic indexing. The CMU Sphinx4 ASR engine is used to generate transcripts from thirteen recorded lectures from Open Yale Courses, using the English HUB4 language model as a reference and the two topic-specific language models generated for each lecture from Wikipedia. Three standard metrics – Perplexity, Word Error Rate and Word Correct Rate – are used to evaluate the extent to which the adapted language models improve the searchability of the resulting transcripts, and in particular improve the recognition of specialist words. Ranked Word Correct Rate is proposed as a new metric better aligned with the goals of improving transcript searchability and specialist word recognition. Analysis of recognition performance shows that the language models derived using the similarity-based Wikipedia crawler outperform models created using the naïve crawler, and that transcripts using similarity-based language models have better perplexity and Ranked Word Correct Rate scores than those created using the HUB4 language model, but worse Word Error Rates. It is concluded that English Wikipedia may successfully be used as a language resource for unsupervised topic adaptation of language models to improve recognition performance for better searchability of lecture recording transcripts, although possibly at the expense of other attributes such as readability.
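The link-filtering rule of the similarity-based crawler can be sketched as follows (names, vectors and threshold are illustrative, not from the thesis): a link is followed only if the cosine similarity between the parent article's latent semantic vector and the linked article's vector exceeds a threshold.

```go
// Minimal sketch of similarity-gated link following with toy LSI vectors.
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	const minSimilarity = 0.5 // illustrative threshold

	// Illustrative low-dimensional latent vectors for three articles.
	parent := []float64{0.9, 0.1, 0.3}
	links := map[string][]float64{
		"Hidden Markov model":  {0.8, 0.2, 0.4},
		"Association football": {0.1, 0.9, 0.0},
	}
	for title, v := range links {
		sim := cosine(parent, v)
		fmt.Printf("%-22s sim=%.2f follow=%v\n", title, sim, sim >= minSimilarity)
	}
}
```

Only the articles that survive this filter contribute text to the lecture-specific lexicon and language model, which is what keeps the adapted model on topic.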
6

Reinforcement Learning with Recurrent Neural Networks

Schäfer, Anton Maximilian 20 November 2008 (has links)
Controlling a high-dimensional dynamical system with continuous state and action spaces in a partially unknown environment, such as a gas turbine, is a challenging problem. So far, hard-coded rules based on experts' knowledge and experience are often used. Machine learning techniques, including the field of reinforcement learning, are generally only applied to sub-problems. A reason for this is that most standard RL approaches still fail to produce satisfactory results in such complex environments. Besides, they are rarely data-efficient, which is crucial for most real-world applications, where the available amount of data is limited. In this thesis, recurrent neural reinforcement learning approaches to identify and control dynamical systems in discrete time are presented. They form a novel connection between recurrent neural networks (RNN) and reinforcement learning (RL) techniques. RNN are used because they allow for the identification of dynamical systems in the form of high-dimensional, non-linear state space models, and they have been shown to be very data-efficient. In addition, a proof is given of their universal approximation capability for open dynamical systems. Moreover, it is pointed out that, in contrast to an often-cited claim, they are well able to capture long-term dependencies. As a first step towards reinforcement learning, it is shown that RNN can accurately map and reconstruct (partially observable) MDPs. In the so-called hybrid RNN approach, the resulting inner state of the network is then used as a basis for standard RL algorithms. The further-developed recurrent control neural network combines system identification and determination of an optimal policy in a single network. In contrast to most RL methods, it determines the optimal policy directly, without making use of a value function. The methods are tested on several standard benchmark problems and are additionally applied to different kinds of gas turbine simulations of industrial scale.
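The state-space view described here can be sketched as the discrete-time recurrence s_{t+1} = tanh(A s_t + B u_t), y_t = C s_t. The short Go program below simulates such a model forward with toy, hand-picked weights; in the thesis the weights are learnt from observed system data rather than fixed by hand.

```go
// Hypothetical sketch: forward simulation of a small non-linear
// state-space RNN with a 2-dimensional hidden state, scalar control
// input and scalar observation. All weights are illustrative.
package main

import (
	"fmt"
	"math"
)

func main() {
	A := [2][2]float64{{0.9, -0.2}, {0.1, 0.8}} // state transition weights
	B := [2]float64{0.5, 0.1}                   // control input weights
	C := [2]float64{1.0, 0.5}                   // observation weights

	s := [2]float64{0, 0}
	for t := 0; t < 5; t++ {
		u := 1.0 // constant control input for the demo
		var next [2]float64
		for i := 0; i < 2; i++ {
			pre := A[i][0]*s[0] + A[i][1]*s[1] + B[i]*u
			next[i] = math.Tanh(pre)
		}
		s = next
		y := C[0]*s[0] + C[1]*s[1]
		fmt.Printf("t=%d  state=%.3f,%.3f  output=%.3f\n", t, s[0], s[1], y)
	}
}
```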
