31

Speech and neural network dynamics

Renals, Stephen John January 1990
This thesis is concerned with two principal issues. Firstly, the radial basis function (RBF) network is introduced and its properties related to other statistical and neural network classifiers. Results from a series of speech recognition experiments, using this network architecture, are reported. These experiments included a continuous speech recognition task with a 571-word lexicon. Secondly, a study of the dynamics of a simple recurrent network model is presented. This study was performed numerically, via a survey of network power spectra and a detailed investigation of the dynamics displayed by a particular network. Word and sentence recognition errors are reported for a continuous speech recognition system using RBF network phoneme modelling with Viterbi smoothing, using either a restricted grammar or no grammar whatsoever. In a cytopathology task domain the best RBF/Viterbi system produced first-choice word errors of 6% and sentence errors of 14%, using a grammar of perplexity 6. This compares with word errors of 4% and sentence errors of 8% using the best CSTR hidden Markov model configuration. RBF networks were also used for a static vowel labelling task using hand-segmented vowels excised from continuous speech. Results were no worse than those obtained using statistical classifiers. The second part of this thesis is a computational study of the dynamics of a recurrent neural network model. Two investigations were undertaken. Firstly, a survey of network power spectra was used to map out the temporal activity of this network model (within a four-dimensional parameter space) via summary statistics of the network power spectra. Secondly, the dynamics of a particular network were investigated. The dynamics were analysed using bifurcation diagrams, power spectra, the computation of Liapunov exponents and fractal dimensions, and the plotting of 2-dimensional attractor projections. Complex dynamical behaviour was observed, including Hopf bifurcations, the Ruelle-Takens-Newhouse route to chaos with mode-locking at rational winding numbers, the period-doubling route to chaos and the presence of multiple coexisting attractors.
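As an illustration of the classifier family this abstract refers to, the following is a minimal RBF network sketch in Python, assuming Gaussian basis functions with a shared width, centres drawn at random from the training data, and a least-squares linear output layer; the thesis's actual centre selection, widths and phoneme-modelling pipeline are not specified here.

    import numpy as np

    def rbf_design_matrix(X, centres, width):
        # Gaussian basis activations: phi_ij = exp(-||x_i - c_j||^2 / (2*width^2))
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    class RBFClassifier:
        """Minimal RBF network: Gaussian hidden layer plus a linear output
        layer solved by least squares against one-hot targets."""
        def __init__(self, n_centres=20, width=1.0, seed=0):
            self.n_centres, self.width = n_centres, width
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            # Pick centres as a random subset of the training data.
            idx = self.rng.choice(len(X), self.n_centres, replace=False)
            self.centres = X[idx]
            Phi = rbf_design_matrix(X, self.centres, self.width)
            T = np.eye(y.max() + 1)[y]  # one-hot targets
            self.W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
            return self

        def predict(self, X):
            return rbf_design_matrix(X, self.centres, self.width).dot(self.W).argmax(axis=1)

    # Example: 200 two-dimensional points in 3 synthetic classes.
    X = np.random.default_rng(1).normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)
    print((RBFClassifier().fit(X, y).predict(X) == y).mean())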
32

A framework of hierarchy for neural theory

Vellacott, Oliver R. January 1991
There is currently no generally-accepted theory explaining how neural systems realise complex function. Indeed, it is believed by some that neural systems are fundamentally opaque. A framework of hierarchy is proposed as the basis of neural theory. By the application of hierarchy to neural systems it is possible to explain how complex function is computed. At the primitive (hardware) level it is only possible to understand the computation of primitive functions. To understand the computation of higher level function it is necessary to abstract primitive function, via an arbitrary number of intermediate levels of complexity, to the appropriate level of abstraction. Application of the framework is facilitated by a software tool which implements a specification as a neural system, to which training can then be applied. This specification is hierarchical, and is described in a fully distributed, object-oriented style. Networks constructed by this method are not restricted to any of the traditional neural models. The class of topologies which may be implemented is unrestricted. The framework is applied to the recognition of numberplates. This practical demonstration shows that (a) hierarchy enables neural computation of complex function to be understood; (b) the application of hierarchy allows the integration of specification and learning as methods of implementation; and (c) the framework facilitates the scaling-up of neural systems.
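The central idea, that function computed at the primitive level only becomes intelligible when abstracted through intermediate levels, can be illustrated with a toy Python sketch; logic gates stand in for neural primitives here, and the choice of NAND and XOR is purely illustrative, not taken from the thesis.

    # Primitive (hardware) level: units compute only primitive functions.
    def unit_and(a, b): return a and b
    def unit_not(a): return not a

    # One level of abstraction up: a NAND "module" built from primitives.
    def module_nand(a, b): return unit_not(unit_and(a, b))

    # Higher level still: XOR is understood as a composition of NAND
    # modules, a function not visible at the primitive level.
    def module_xor(a, b):
        n = module_nand(a, b)
        return module_nand(module_nand(a, n), module_nand(b, n))

    assert [module_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == \
        [False, True, True, False]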
33

An investigation of practical issues in translating algorithms based on back-propagation into analogue VLSI circuits

Woodburn, Robin January 1996
One of the most widely-used artificial neural networks is the multi-layer perceptron, trained by error-back-propagation (the 'back-propagation algorithm'). Commonly, the network is implemented as a serial-computer simulation, but there has been considerable interest in translating it into hardware. The most difficult translation into analogue VLSI is the 'learning' part of the algorithm, that is the part which involves calculating the output errors and making appropriate modifications to the analogue weights representing the connections between nodes. For this reason, most analogue hardware implementations train weights held off-chip in a digital representation; the weights are converted to an analogue representation for storage on the chip which comprises the network. This thesis examines the Virtual Targets algorithm, based on back-propagation, but with some modifications which render it more amenable to translation into analogue VLSI circuits which can 'learn on-chip'. I describe several circuits, designed to exploit our research group's pulse-stream approach to analogue VLSI, which provide four-quadrant multiplication, and calculate differences, signs and error-derivatives. Results, from simulation and from a chip fabricated with the circuits, are given. A consideration of other approaches to the problem of learning on-chip makes it clear that key issues are weight-storage, and a means of modifying the weights. I explain why calculating exact weight-changes is difficult, and give the results of simulation experiments leading to a further simplification of the Virtual Targets algorithm which makes it possible to train the network using fixed increments and decrements of the weights. I show the results of tests of circuits on a second chip, designed with implementation of the entire algorithm in mind, and assess the likelihood of such an implementation being successful. I place this analysis in the context of the search for 'intelligent' machines, and ask how far designs such as my own might contribute to such a machine. I also make some suggestions on the most fruitful directions for analogue designs of artificial neural networks.
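The final simplification described, training with fixed increments and decrements of the weights, corresponds in software terms to a sign-based update rule. A minimal sketch follows; the step size and the dictionary-of-arrays layout are assumptions for illustration, not details from the thesis.

    import numpy as np

    def fixed_step_update(weights, grads, delta=0.01):
        """Fixed increment/decrement update: move each weight by a constant
        step against the sign of its error derivative, ignoring the
        magnitude. Only the sign of the weight change needs computing,
        which is the kind of simplification that eases on-chip learning."""
        return {k: w - delta * np.sign(grads[k]) for k, w in weights.items()}

    w = {"w1": np.array([0.20, -0.40])}
    g = {"w1": np.array([0.50, -1.20])}
    print(fixed_step_update(w, g))  # each weight moves by exactly 0.01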
34

Probabilistic fuzzy logic framework in reinforcement learning for decision making

Hinojosa, William January 2010
This dissertation focuses on the problem of uncertainty handling during learning by agents operating in stochastic environments by means of reinforcement learning. Most previous investigations in reinforcement learning have proposed algorithms that address learning performance but neglect the uncertainty present in stochastic environments. Reinforcement learning is a valuable learning method when a system requires a selection of actions whose consequences emerge over long periods for which input-output data are not available. In most combinations of fuzzy systems with reinforcement learning, the environment is considered deterministic. However, in many cases, the consequence of an action may be uncertain or stochastic in nature. This work proposes a novel reinforcement learning approach combined with the universal function approximation capability of fuzzy systems within a probabilistic fuzzy logic theory framework, where the information from the environment is not interpreted in a deterministic way, as in classic approaches, but rather in a statistical way that considers a probability distribution of long-term consequences. The generalized probabilistic fuzzy reinforcement learning (GPFRL) method, presented in this dissertation, is a modified version of the actor-critic learning architecture, where learning is enhanced by the introduction of a probability measure into the learning structure and an incremental gradient-descent weight-updating algorithm provides convergence. Experiments were performed on simulated and real environments based on a travel planning spoken dialogue system. Experimental results provided evidence to support the following claims: first, GPFRL shows robust performance when used in control optimization tasks; second, its learning speed outperforms that of most similar methods; third, GPFRL agents are feasible and promising for the design of adaptive-behaviour robotic systems.
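For orientation, a bare-bones actor-critic update of the kind GPFRL modifies might look as follows; the probabilistic fuzzy machinery, the probability measure and the convergence analysis of GPFRL are all abstracted away, and the tabular state/action layout is an assumption for illustration.

    import numpy as np

    def actor_critic_step(theta, v, s, a, r, s_next, alpha=0.1, beta=0.1, gamma=0.95):
        """One actor-critic update on a discrete state/action space.
        theta: actor preference table (states x actions); v: critic values.
        The critic's TD error reinforces or weakens the actor's
        preference for the action taken."""
        td_error = r + gamma * v[s_next] - v[s]  # critic's surprise signal
        v[s] += beta * td_error                  # critic update
        theta[s, a] += alpha * td_error          # actor update
        return td_error

    def softmax_policy(theta, s, rng):
        # Sample an action in proportion to exponentiated preferences.
        p = np.exp(theta[s] - theta[s].max())
        p /= p.sum()
        return rng.choice(len(p), p=p)

    theta, v = np.zeros((4, 2)), np.zeros(4)
    rng = np.random.default_rng(0)
    a = softmax_policy(theta, 0, rng)
    actor_critic_step(theta, v, s=0, a=a, r=1.0, s_next=1)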
35

GECAF : a generic and extensible framework for developing context-aware smart environments

Sabagh, Angham January 2011
The new pervasive and context-aware computing models have resulted in the development of modern environments which are responsive to the changing needs of the people who live, work or socialise in them. These are called smart environments, and they employ a high degree of intelligence to consume and process information in order to provide services to users in accordance with their current needs. To achieve this level of intelligence, such environments collect, store, represent and interpret a vast amount of information which describes the current context of their users. Since context-aware systems differ in the way they interact with users and interpret the context of their entities and the actions they need to take, each individual system is developed in its own way with no common architecture. This fact makes the development of every context-aware system a challenge. To address this issue, a new and generic framework has been developed which is based on the Pipe-and-Filter software architectural style and can be applied to many systems. This framework uses a number of independent components that represent the usual functions of any context-aware system. These components can be configured in different arrangements to suit the various systems' requirements. The framework and architecture use a model to represent raw context information as a function of context primitives, referred to as Who, When, Where, What and How (4W1H). Historical context information is also defined and added to the model to predict some actions in the system. The framework uses XML code to represent the model and describes the sequence in which context information is processed by the architecture's components (or filters). Moreover, a mechanism for describing interpretation rules for the purpose of context reasoning is proposed and implemented. A set of guidelines is provided for both the deployment and rule languages to help application developers in constructing and customising their own systems using various components of the new framework. To test and demonstrate the functionality of the generic architecture, a smart classroom environment has been adopted as a case study. An evaluation of the new framework has also been conducted using two methods: quantitative and case-study-driven evaluation. The quantitative method used information obtained from reviewing the literature, which was then analysed and compared with the new framework in order to verify the completeness of the framework's components for different situations. In the case study method, on the other hand, the new framework has been applied in the implementation of different scenarios of well-known systems. This method is used for verifying the applicability and generic nature of the framework. As an outcome, the framework is proven to be extensible, with a high degree of reusability and adaptability, and can be used to develop various context-aware systems.
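A toy Python sketch of the Pipe-and-Filter arrangement over a 4W1H context record may help fix ideas; the filter names and the classroom rule are hypothetical, not taken from the framework itself.

    def make_pipeline(*filters):
        """Compose independent filters into a Pipe-and-Filter chain:
        each filter takes a context record and returns an enriched one."""
        def run(context):
            for f in filters:
                context = f(context)
            return context
        return run

    # Hypothetical filters over a 4W1H record (who/when/where/what/how).
    def acquire(ctx):
        # Acquisition filter: stamp the record with missing raw context.
        ctx.setdefault("when", "09:00")
        return ctx

    def interpret(ctx):
        # Interpretation rule: a lecturer in the classroom implies a lecture.
        if ctx.get("who") == "lecturer" and ctx.get("where") == "classroom":
            ctx["what"] = "start_lecture"
        return ctx

    def act(ctx):
        # Action filter: map the interpreted situation to a service.
        if ctx.get("what") == "start_lecture":
            ctx["action"] = "switch_on_projector"
        return ctx

    pipeline = make_pipeline(acquire, interpret, act)
    print(pipeline({"who": "lecturer", "where": "classroom"}))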
36

Metanorms, topologies, and adaptive punishment in norm emergence

Mahmoud, Samhar January 2013
Norms provide a means to regulate the behaviour of the members of a society, organisation or system. While much work has been done on various aspects of norms, normative systems and normative behaviour, this work has been limited in several respects. In particular, the problems of norm emergence have only recently begun to be considered, with existing work adopting only simple structural models. This relates to two crucial issues that have not adequately been addressed. First, existing models of norms assume that sanctions are static, and do not change in relation to relevant information about the violator, the situation or the history. Yet, typically there is information available that can significantly impact on the nature of such sanctions, and can even allow sophisticated sanctioning structures that achieve more effective regulation. Second, work on norm emergence has typically assumed simple topological structures of agents, if any at all, yet real computational systems in which norms are relevant, such as peer-to-peer systems and wireless sensor networks, may have topologies of varying degrees of sophistication. These topologies constrain potential relationships between agents, limiting the observation of violations, and possibly also limiting the kind of sanctions that may be imposed. In this thesis, therefore, we seek to address these problems in support of more effective norm-regulated systems, by developing mechanisms that can incentivise cooperative behaviour in societies of self-interested agents.
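To make the first issue concrete, a sanction that adapts to the violator's history rather than remaining static could be sketched as follows; the multiplicative-escalation rule and its parameters are illustrative assumptions, not the thesis's mechanism.

    def adaptive_sanction(base_penalty, history, factor=1.5):
        """Illustrative dynamic sanction: the penalty grows with the
        violator's record of past violations instead of staying static.
        history: list of past violation flags (1 = violated) for this agent."""
        repeat_offences = sum(history)
        return base_penalty * (factor ** repeat_offences)

    # A first offence draws the base penalty; repeat offenders pay more.
    print(adaptive_sanction(1.0, []))       # 1.0
    print(adaptive_sanction(1.0, [1, 1]))   # 2.25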
37

Efficient adaptive multi-granularity service composition

Barakat, Lina January 2013
Despite the tremendous benefits of the dynamic, service-oriented approach to build composite applications, it also brings great challenges. In particular, the run-time selection of the most suitable services for an application in a timely manner is not trivial, since many providers could be competing for the same type of service, but at different quality of service levels. Due to possible dependencies among services, such quality offerings could also vary at a single service level. This is further complicated by the fact that the available services may offer to achieve the tasks required at varying functional abstractions. Moreover, services are highly dynamic and unreliable in nature, which can cause serious problems to the execution of workflows relying on such services. In this thesis, we contribute towards addressing these challenges, and achieve a more efficient, robust, and optimal dynamic composition process. Specifically, through a rich collection of alternative planning options, we allow services at various granularity levels to be incorporated into the selection process. We also enrich the quality model of services with inter-service dependency awareness, to produce correct quality estimations. Furthermore, we develop efficiency-boosting techniques facilitating a scalable service selection process without affecting optimality, even in the case where the search space experiences complex dependencies among services. In the face of environment dynamism and uncertainty, we achieve an early and efficient adaptive behaviour, which ensures a valid, optimal, and satisfactory solution, in spite of high environment volatility, and without causing disruption to application execution. The effectiveness of all the algorithms and techniques developed in this thesis is demonstrated analytically and empirically. The latter is achieved both on randomly generated datasets, and through a case study evaluation applied in the context of learning object composition.
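As a simplified illustration of the selection problem, the following sketch chooses, for each task, the candidate service with the highest weighted QoS utility; the attribute names and weights are assumed, and the greedy strategy deliberately ignores the inter-service quality dependencies and global optimality that the thesis's algorithms handle.

    def utility(service, weights):
        """Weighted sum of normalised QoS attributes (higher is better)."""
        return sum(weights[k] * service[k] for k in weights)

    def select_services(candidates_per_task, weights):
        """Greedy per-task selection: pick the best-scoring candidate for
        each task of the composite application independently."""
        return {task: max(cands, key=lambda s: utility(s, weights))
                for task, cands in candidates_per_task.items()}

    candidates = {"payment": [{"reliability": 0.9, "speed": 0.4},
                              {"reliability": 0.7, "speed": 0.9}]}
    print(select_services(candidates, {"reliability": 0.6, "speed": 0.4}))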
38

Helmholtz machines and non-stationary data fusion

Dalzell, Ryan January 2001
This thesis proposes that the autonomous model-building capability of the Helmholtz Machine neural network can be used to reduce the effects of non-stationarity in a data fusion application. The particular application area studied in this work is sensor drift in a sensor array. A solution is attempted by tracking the drift in a subset of the sensors in an array by adapting the neural network's model of the sensor data without affecting the properties of the model. It is shown empirically that the original binary valued unit Helmholtz Machine is suitable for this task in only a limited manner. A new network is therefore introduced: the Discrete valued Helmholtz Machine. Although this network is not found to realise the original proposition it provides valuable new understanding of Helmholtz Machines and their associated Wake-Sleep learning algorithm.
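For readers unfamiliar with the Wake-Sleep algorithm mentioned above, a schematic single-hidden-layer sketch follows; binary stochastic units, visible-layer biases omitted for brevity. This is a generic textbook-style formulation, not the thesis's drift-tracking setup or its Discrete valued variant.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    n_v, n_h, lr = 6, 3, 0.05
    R = np.zeros((n_v, n_h))   # recognition weights: visible -> hidden
    G = np.zeros((n_h, n_v))   # generative weights: hidden -> visible
    b_h = np.zeros(n_h)        # generative bias on the hidden units

    def wake_sleep_step(v):
        global R, G, b_h
        # Wake phase: recognise the data, then nudge the generative model
        # (delta rule) so the inferred causes reproduce the data.
        h = sample(sigmoid(v @ R))
        b_h += lr * (h - sigmoid(b_h))
        G += lr * np.outer(h, v - sigmoid(h @ G))
        # Sleep phase: dream from the generative model, then nudge the
        # recognition weights so they recover the dream's causes.
        h_dream = sample(sigmoid(b_h))
        v_dream = sample(sigmoid(h_dream @ G))
        R += lr * np.outer(v_dream, h_dream - sigmoid(v_dream @ R))

    for _ in range(1000):
        wake_sleep_step(sample(np.full(n_v, 0.8)))  # toy stationary source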
39

Automatic facial recognition based on facial feature analysis

Sutherland, Kenneth Gavin Neil January 1992
As computerised storage and control of information is now a reality, it is increasingly necessary that personal identity verification be used as the automated method of access control to this information. Automatic facial recognition is now being viewed as an ideal solution to the problem of unobtrusive, high-security, personal identity verification. However, few researchers have yet managed to produce a face recognition algorithm capable of performing successful recognition without requiring substantial data storage for the personal information. This thesis reports the development of a feature- and measurement-based system of facial recognition, capable of storing the intrinsics of a facial image in a very small amount of data. The parameterisation of the face into its component characteristics is essential to both human and automated face recognition. Psychological and behavioural research has been reviewed in this thesis in an attempt to establish any key pointers, in human recognition, which can be exploited for use in an automated system. A number of different methods of automated facial recognition, which perform facial parameterisation in a variety of different ways, are discussed. In order to store the relevant characteristics and measurements about the face, the pertinent facial features must be precisely located from within the image data. A novel technique of Limited Feature Embedding, which locates the primary facial features with a minimum of computational load, has been successfully designed and implemented. The location process has been extended to isolate a number of other facial features. With regard to the earlier review, a new method of facial parameterisation has been devised. Incorporated in this feature set are local feature data and structural measurement information about the face. A probabilistic method of inter-person comparison, which facilitates recognition even in the presence of expressional and temporal changes, has been successfully implemented. Comprehensive results of this novel recognition technique are presented for a variety of different operating conditions.
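A probabilistic comparison over a compact measurement vector could, in its simplest form, look like the following variance-weighted distance; the feature names, values and diagonal-covariance assumption are illustrative, not the thesis's actual method.

    import numpy as np

    def match_score(probe, stored_mean, stored_var):
        """Variance-weighted (diagonal Mahalanobis-style) distance between
        a probe's facial measurements and a stored template; features that
        vary widely across expressions are down-weighted. Lower is a
        better match."""
        return float(np.sum((probe - stored_mean) ** 2 / stored_var))

    # Hypothetical template: eye spacing, nose length, mouth width (pixels),
    # with per-feature variability estimated across enrolment captures.
    template_mean = np.array([62.0, 41.5, 23.0])
    template_var = np.array([4.0, 9.0, 16.0])
    print(match_score(np.array([63.0, 40.0, 25.0]), template_mean, template_var))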
40

The evolutionary design of digital VLSI hardware

Thomson, Robert January 2005
In this thesis, multi-objective evolutionary algorithms are applied to the design of efficient digital ASIC cores. Specifically, the thesis addresses the evolutionary synthesis of multiplierless linear filters, multiplierless linear transforms, and polynomial transform designs. The designs are constructed from high-level arithmetic components such as adders and subtracters, according to a user-supplied behavioural specification. The designs are evaluated according to three different objectives: functionality, low area requirements, and low longest-path delay. In order to evaluate these objectives, accurate hardware models are developed. Evolutionary algorithms are often applied to scheduling problems. This thesis investigates the possibility of performing scheduling and allocation in parallel with circuit evolution. Two possibilities are considered: scheduling for sequential operation and pipeline scheduling. The choice of solution representation and evolutionary operators can have an enormous impact on the performance of an evolutionary algorithm. In this thesis, solutions are represented with graphs. Graphs are found to be a powerful and intuitive representation for circuit designs, although the complexity of the evolutionary operators tends to be higher than with other encodings. Various graph evolutionary operators are developed, including a novel non-destructive graph crossover operator. This thesis also proposes a class of local search operators. These operators can significantly improve the performance of an evolutionary algorithm. The improvement is achieved in two ways: by reducing the computational cost of evaluating a design, and by automatically finding optimal settings for some of the design parameters. These local search operators are initially applied to linear designs, and are later adapted for devices with polynomial responses.
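The three-objective evaluation lends itself to Pareto-based comparison; a minimal sketch of dominance and front extraction over (functionality error, area, delay) tuples, with illustrative values, is given below.

    def dominates(a, b):
        """Pareto dominance over (functionality_error, area, delay), all
        minimised: a dominates b if it is no worse on every objective and
        strictly better on at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(population):
        # Keep only designs not dominated by any other design.
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    designs = [(0.0, 120, 9.5), (0.0, 150, 8.0), (0.1, 100, 10.0), (0.0, 160, 9.9)]
    print(pareto_front(designs))  # the last design is dominated by the first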
