41. Analogue imprecision in MLPs: implications and learning improvements

Edwards, Peter J. January 1994
Analogue hardware implementations of Multi-Layer Perceptrons (MLPs) have a limited precision that has a detrimental effect on the result of synaptic multiplication. At the same time, however, the accuracy of the circuits can be very high with good design. This thesis investigates the consequences of the imprecision on the performance of the MLP, examining whether it is accuracy or precision that is of importance in neural computation. The results of this thesis demonstrate that, far from having a detrimental effect, the imprecision, or synaptic weight noise, enhances the performance of the solution. In particular, the fault tolerance and generalisation ability are improved. In addition, under certain conditions, the learning trajectory of the training network is also improved. The enhancements are established through a mathematical analysis and subsequent verification experiments. Simulation experiments examine the underlying mechanisms and probe the limitations of the technique as an enhancement scheme. For a variety of problems, precision is shown to be significantly less important than accuracy; in fact, imprecision can have beneficial effects on learning performance.
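For readers who want the flavour of the technique, the sketch below trains a small MLP with multiplicative Gaussian noise injected into every weight on every pass, mimicking imprecise analogue multipliers. The network size, noise level, and toy task are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, noise=0.0):
    """One pass through a one-hidden-layer MLP; each weight is perturbed
    multiplicatively, mimicking imprecise analogue multipliers."""
    W1n = W1 * (1 + noise * rng.standard_normal(W1.shape))
    W2n = W2 * (1 + noise * rng.standard_normal(W2.shape))
    h = np.tanh(x @ W1n)
    return np.tanh(h @ W2n), h, W1n, W2n

# Toy task: XOR with +/-1 targets (an illustrative choice).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[-1.], [1.], [1.], [-1.]])

W1 = 0.5 * rng.standard_normal((2, 4))
W2 = 0.5 * rng.standard_normal((4, 1))
lr, noise = 0.1, 0.1               # 10% weight noise during training (assumed level)

for _ in range(2000):
    out, h, W1n, W2n = forward(X, W1, W2, noise)
    err = out - y
    # Backpropagate through the *noisy* weights: the stored clean weights
    # receive gradients computed on the perturbed network, as hardware would.
    d2 = err * (1 - out**2)
    d1 = (d2 @ W2n.T) * (1 - h**2)
    W2 -= lr * h.T @ d2
    W1 -= lr * X.T @ d1

print(forward(X, W1, W2, noise=0.0)[0].round(2))   # evaluate with clean weights
```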
42. The use of non-volatile a-Si:H memory devices for synaptic weight storage in artificial neural networks

Holmes, Andrew J. January 1995
This thesis describes the development of an ANN chip in which a-Si:H resistors are integrated with CMOS circuitry, eliminating the need for the external refresh or neuron circuitry required by ANN designs based on dynamic storage techniques. The a-Si:H memory technology, developed in collaboration with Dundee University, is in effect a programmable, non-volatile semiconductor resistor. The device consists of a thin (1000 Å) layer of a-Si:H sandwiched between vanadium and chromium electrodes. During the project a total of three test chips were designed and fabricated. The first chip was used to investigate the fabrication of memory devices on the surface of a CMOS wafer; previously all the test devices had been constructed on glass slides. Results from this chip showed that it was possible to fabricate programmable a-Si:H resistors on a CMOS chip. The second chip contained five different synapse designs, all of which used the a-Si:H resistor as the memory element. The best of these was then used in the construction of the final ANN chip, which contained an 8 × 8 array of synapses with digital addressing and required minimal support circuitry. Conclusions are drawn about both the performance of the a-Si:H memory device and alternative approaches to non-volatile storage in ANN chips, and recommendations are made for future work in this area.
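As a side note on why a single programmable resistor suffices as a synapse: with the weight stored as a conductance, Ohm's law performs the multiplication and summing the currents on a column wire performs the accumulation. A minimal sketch with arbitrary illustrative values:

```python
import numpy as np

# Synaptic weights stored as conductances (siemens); inputs as voltages (V).
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])       # 2 x 2 array of a-Si:H-style resistors
v = np.array([0.5, -0.2])          # input voltages driving the rows

# Each output column wire sums the currents I = G * V flowing into it,
# i.e. the array performs an analogue matrix-vector multiply.
i_out = v @ G
print(i_out)                       # output currents (A) on each column
```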
43. Detection performance for sampled-data signals in a grammar-based pattern recognition system

Lin, Duncan T. T. January 1996
The detection performance of a new syntactic pattern recognition system based on an augmented programmed grammar (APG) is investigated. APGs are a generalisation of programmed grammars in which each production rule has a label, a core and two associated lists of labels stating which rule is to be applied next. The core of an APG, provided by core-functions, allows operations beyond the regular phrase-structure productions to be implemented. The ability to manipulate grammar variables with true functions adds intelligence and flexibility to the parser formulation. This investigation uses an improved APG recogniser, developed from a prior design, to achieve enhanced noise tolerance. It is able to correctly recognise one-dimensional waveforms with a wide range of sizes or scale factors using a single grammatical representation. Parser recognition performance is obtained by applying Monte Carlo tests at various signal-to-noise ratios and different operating parameter settings. The acquired detection statistics reveal both the recognition response for different constitutions of the input signal and the influence of the various operating parameters on performance. An idea for modifying the transmitted waveform design to suppress the formation of false waveforms is subsequently developed. The detection statistics are endorsed by a theoretical analysis. Finally, the provision of a waveform-deviation tolerance capability is shown to improve the recognition of quadratic and linear waveform segments.
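To make the rule structure concrete, the toy interpreter below runs a programmed-grammar derivation in which each rule carries a label, a core, and success/failure lists of next-rule labels; in a full APG the core is an arbitrary function rather than a plain rewrite, and a real parser would search or backtrack over the label lists. The grammar and control policy here are invented for illustration.

```python
# Toy programmed grammar generating a^n b^n: rule r1 wraps the sentential
# form, rule r2 terminates. Each rule = (core, success list, failure list).
rules = {
    "r1": {"core": lambda s: s.replace("S", "aSb", 1), "ok": ["r1", "r2"], "fail": []},
    "r2": {"core": lambda s: s.replace("S", "ab", 1),  "ok": [],           "fail": []},
}

def derive(start="S", label="r1", depth=1):
    form = start
    while True:
        rule = rules[label]
        new = rule["core"](form)
        branch = rule["ok"] if new != form else rule["fail"]
        form = new
        if not branch:
            return form
        # A real APG parser would choose among (and backtrack over) this
        # list; here we take r1 'depth' more times, then finish with r2.
        label = branch[0] if depth > 0 else branch[-1]
        depth -= 1

print(derive(depth=1))   # -> 'aaabbb' (two r1 wraps, then r2)
```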
44. A genetic algorithm for automatic optical inspection

Mashohor, Syamsiah January 2006
In high- and medium-volume production lines, conveyor belts are usually employed to speed up the manufacturing process. However, placing a Printed Circuit Board (PCB) on a conveyor belt with the high precision required for visual inspection is tedious and costly when achieved through hardware alone. Consequently, a robust image registration technique is crucial in order to register images of inspected PCBs with high quality but low computational cost. This thesis investigates the ability of a Genetic Algorithm (GA) to optimise the image registration of whole PCBs, offering flexibility in how whole boards are positioned on a conveyor belt during the acquisition stage of an inspection process. A novel GA-based image registration method is developed and tailored for quality, reliability and speed. The performance of the GA-based image registration determines the success of a dedicated defect detection procedure which employs a combination of image processing functions. The procedure produces excellent results in detecting missing components and open solder joints on whole inspected PCBs of any shape and size. The performance of this procedure, as presented in this thesis, demonstrates the potential of a system integrating GA-based image registration with the defect detection procedure. The system proves able to register images with high accuracy and consequently to produce high-quality defect detection results in reasonable computing time.
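A hedged sketch of the general approach: a chromosome encodes a candidate transformation (here just an integer translation; a fuller scheme would add rotation and scale genes), and fitness rewards similarity between the reference image and the transformed candidate. The similarity measure, GA operators, and parameters below are illustrative assumptions, not the thesis's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(ref, img, dx, dy):
    """Similarity between reference and candidate-shifted image; plain
    negative mean-squared error, used here for brevity."""
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return -np.mean((ref - shifted) ** 2)

def register(ref, img, pop_size=30, gens=40, span=20):
    # Chromosome = integer (dx, dy) translation.
    pop = rng.integers(-span, span + 1, size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(ref, img, dx, dy) for dx, dy in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        # Uniform crossover (each gene from a random parent) plus mutation.
        kids = parents[rng.integers(len(parents),
                                    size=(pop_size - len(parents), 2)), [0, 1]]
        kids += rng.integers(-2, 3, size=kids.shape)
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(ref, img, dx, dy) for dx, dy in pop])
    return pop[scores.argmax()]

y, x = np.mgrid[0:64, 0:64]
ref = np.sin(x / 6.0) + np.cos(y / 9.0)             # smooth stand-in for a PCB image
img = np.roll(np.roll(ref, -7, axis=0), 4, axis=1)  # board shifted on the belt
print(register(ref, img))                           # recovers (dx, dy) ~ (-4, 7)
```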
45. Compositional ecological modelling via dynamic constraint satisfaction with order-of-magnitude preferences

Keppens, Jeroen January 2002
Compositional modelling is one of the most important knowledge-based approaches to automating domain model construction. However, its use has been limited to physical systems due to the specific presumptions made by existing techniques. Based on a critical survey of existing compositional modellers, the strengths and limitations of compositional modelling for application in the ecological domain are identified and addressed. The thesis presents an approach for effectively building and (re-)using repositories of models of ecological systems, although the underlying methods are domain-independent. It works by translating the compositional modelling problem into a dynamic constraint satisfaction problem (DCSP). This enables the user of the compositional modeller to specify requirements on the model selection process and to find an appropriate model through efficient DCSP solution techniques. In addition to hard dynamic constraints over the modelling choices, the ecologist/user of the automated modeller may also have a set of preferences over these options. Because ecological models are typically gross abstractions of very complex and yet only partially understood systems, information on which modelling approach is better is limited, and opinions differ between ecologists. As existing preference calculi are not designed for reasoning with such information, a calculus of partially ordered preferences, rooted in order-of-magnitude reasoning, is also devised within this dissertation. The combination of the dynamic constraint satisfaction problem derived from compositional modelling with the preferences provided by the user forms a novel type of constraint satisfaction problem: a dynamic preference constraint satisfaction problem (DPCSP). In this thesis, four algorithms to solve such DPCSPs are presented and experimental results on their performance discussed. The algorithm that translates a compositional modelling problem into a DCSP, the order-of-magnitude preference calculus and one of the DPCSP solution algorithms together constitute an automated compositional modeller. Its suitability for ecological model construction is demonstrated by applications to two sample domains: a set of small population dynamics models and a large model of Mediterranean vegetation growth. The corresponding knowledge bases, and how they are used as part of compositional ecological modelling, are explained in detail.
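The kernel of a dynamic CSP is that assigning one variable can activate others. The miniature solver below illustrates this with invented ecological modelling choices; it omits the order-of-magnitude preference calculus entirely and simply enumerates consistent models.

```python
# A miniature dynamic CSP: assigning one modelling choice can activate
# further variables. Domains and activity rules are invented illustrations.
domains = {
    "growth":   ["exponential", "logistic"],
    "capacity": ["constant", "seasonal"],   # only meaningful for logistic growth
    "grazing":  ["none", "lotka-volterra"],
}

def active(assignment):
    """Variables currently active under the partial assignment."""
    act = ["growth", "grazing"]
    if assignment.get("growth") == "logistic":
        act.append("capacity")              # activity constraint
    return act

def consistent(assignment):
    # Compatibility constraint: rule out a seasonal carrying capacity when
    # grazing is modelled (an arbitrary example restriction).
    return not (assignment.get("capacity") == "seasonal"
                and assignment.get("grazing") == "lotka-volterra")

def solve(assignment=None):
    assignment = assignment or {}
    pending = [v for v in active(assignment) if v not in assignment]
    if not pending:
        yield dict(assignment)
        return
    var = pending[0]
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            yield from solve(assignment)
        del assignment[var]

for model in solve():
    print(model)
```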
46. Presupposition and assertion in dynamic semantics: Part (I) The presupposition: a critical review of presupposition theory; Part (II) The assertion: what comes first in dynamic semantics

Beaver, David Ian January 1995
The Context Change Potential (CCP) model of presupposition, due primarily to Karttunen and Heim, is formally elaborated and modified within a propositional dynamic logic, a first-order dynamic logic, and a three-sorted type theory. It is shown that the definitions of connectives and quantifiers can be motivated independently of the phenomenon of presupposition by consideration of the semantics of anaphora and epistemic modality (cf. the work of Groenendijk & Stokhof and Veltman), and that these independently motivated definitions provide a solution to the projection problem for presupposition. It is argued that, with regard to the interaction between presupposition and quantification, the solution is empirically superior to those in competing accounts. The treatment of epistemic modality is also shown to be superior to existing dynamic accounts, combining solutions to traditional model identity problems with an adequate treatment of presuppositions in quantified contexts. A compositional semantics which integrates dynamic treatments of quantification, anaphora, modality and presupposition is then specified for a fragment of English. Finally, a formal model of global accommodation (cf. Lewis, Heim, van der Sandt) is defined, differing from previous accounts in being non-structural. This means that the accommodated material cannot be deduced from formal properties of the utterance alone, but is essentially dependent on world knowledge and common-sense reasoning. Thus the model provides an essentially pragmatic account in the style of Stalnaker. It is shown that the formal model provides both a general solution to the problem of the informativeness of presuppositions, and a specific solution to a problem within the CCP model, namely its tendency to yield inappropriately weak conditionalised presuppositions. It is argued that a pragmatic model can provide superior predictions to any purely semantic theory of presupposition, and to any theory based on a purely structural account of accommodation or cancellation (cf. Gazdar).
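A toy rendering of the CCP idea may help: a context is a set of possible worlds, an assertion filters it, and a presupposition is a definedness condition on the update that accommodation can repair. The worlds and propositions below are invented stand-ins, and the accommodation shown is of the simple structural kind, not the non-structural model developed in the thesis.

```python
# Toy Context Change Potential: contexts as sets of worlds, updates as
# filters, presuppositions as definedness conditions on the update.
worlds = {"w1": {"france_has_king": True,  "king_bald": True},
          "w2": {"france_has_king": True,  "king_bald": False},
          "w3": {"france_has_king": False, "king_bald": False}}

def update(context, assertion, presupposition=None):
    if presupposition and not all(worlds[w][presupposition] for w in context):
        raise ValueError("presupposition failure: the context must entail it "
                         "(or be repaired by accommodation)")
    return {w for w in context if worlds[w][assertion]}

def accommodate(context, presupposition):
    """Global accommodation: quietly add the presupposition to the context."""
    return {w for w in context if worlds[w][presupposition]}

ctx = set(worlds)                                   # ignorant initial context
ctx = accommodate(ctx, "france_has_king")           # repair before updating
print(update(ctx, "king_bald", "france_has_king"))  # -> {'w1'}
```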
47. Finite size effects in neural network algorithms

Barber, David January 1996
One approach to the study of learning in neural networks within the physics community has been to use statistical mechanics to calculate the expected error that a network will make on a typical novel example, termed the generalisation error. Such average-case analyses have mainly been carried out with recourse to the thermodynamic limit, in which the size of the network is taken to infinity. For a finite-sized network, however, the error is not self-averaging, i.e., it remains dependent upon the actual set of examples used to train and test the network. The error estimated on a specific test set realisation, termed the test error, forms a finite-sample approximation to the generalisation error. This thesis presents a systematic examination of test error variances in finite-sized networks trained by stochastic learning algorithms. Beginning with simple single-layer systems, in particular the linear perceptron, we calculate the test error variance arising from randomness in both the training examples and the stochastic Gibbs learning algorithm. This quantity enables us to examine the performance of networks in a limited-data scenario, including the optimal partitioning of a data set into training and test sets so as to minimise the average error that the network makes while remaining confident that the average test error is representative. A detailed study of the variance of cross-validation errors is carried out, and a comparison made between different cross-validation schemes. We also examine the test error variance of the binary perceptron, comparing the results to the linear case. Employing the results for the variance of errors, we calculate how likely the worst-case errors derived from PAC theory are, finding that the probability of such worst-case occurrences is extremely small. In addition, we study the effect of finite system size on the on-line training of multi-layer networks, tracking the dynamic evolution of the error variance under the stochastic gradient descent algorithm used to train the network on an increasing amount of data. We find that the hidden-unit symmetries of the multi-layer network give rise to relatively large finite-size effects around the point at which the symmetries are broken.
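The flavour of such a calculation can be captured empirically. The sketch below estimates the mean and variance of the test error of a linear perceptron across random realisations of the training and test sets, with ridge-regularised least squares standing in for the Gibbs learning algorithm analysed in the thesis; all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def test_error_variance(n_dim=20, n_train=40, n_test=20, trials=500, sigma=0.1):
    """Monte Carlo estimate of the spread of a linear perceptron's test
    error across random data-set realisations (a finite-size effect)."""
    teacher = rng.standard_normal(n_dim) / np.sqrt(n_dim)
    errors = []
    for _ in range(trials):
        # Fresh training set realisation from the teacher rule.
        X = rng.standard_normal((n_train, n_dim))
        y = X @ teacher + sigma * rng.standard_normal(n_train)
        # Ridge regression stands in for Gibbs learning here.
        w = np.linalg.solve(X.T @ X + 0.1 * np.eye(n_dim), X.T @ y)
        # Fresh finite test set realisation.
        Xt = rng.standard_normal((n_test, n_dim))
        yt = Xt @ teacher + sigma * rng.standard_normal(n_test)
        errors.append(np.mean((Xt @ w - yt) ** 2))
    errors = np.array(errors)
    return errors.mean(), errors.var()

mean, var = test_error_variance()
print(f"test error: mean {mean:.4f}, variance {var:.6f} (finite-size spread)")
```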
48. Role of biases in neural network models

West, Ansgar Heinrich Ludolf January 1997
The capacity problem for multi-layer networks has proven especially elusive. Our calculation of the capacity of multi-layer networks built by constructive algorithms relies heavily on the existence of biases in the basic building block, the binary perceptron. This is the first time the capacity has been explicitly evaluated for large networks and finite stability. One finds that the constructive algorithms studied, a tiling-like algorithm and variants of the upstart algorithm, do not saturate the known Mitchison-Durbin bound. In supervised learning, a student network is presented with training examples in the form of input-output pairs, where the output is generated by a teacher network. The central question to be answered is the relation between the number of examples presented and the typical performance of the student in approximating the teacher rule, usually termed generalisation. The influence of biases in such a student-teacher scenario has been assessed for the two-layer soft-committee architecture, which is a universal approximator and already resembles applicable multi-layer network models, within the on-line learning paradigm, where training examples are presented serially. One finds that adjustable biases dramatically alter the learning behaviour. The suboptimal symmetric phase, which can easily dominate training for fixed biases, vanishes almost entirely for non-degenerate teacher biases. Furthermore, the extended model exhibits a much richer dynamical behaviour, exemplified especially by a multitude of (attractive) suboptimal fixed points even for realisable cases, causing the training to fail or be severely slowed down. In addition, in order to study possible improvements over gradient descent training, an adaptive back-propagation algorithm parameterised by a "temperature" is introduced, which enhances the ability of the student to distinguish between teacher nodes. This algorithm, studied in the various learning stages, provides more effective symmetry breaking between hidden units and faster convergence to optimal generalisation.
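A minimal on-line learning sketch of the student-teacher scenario described above, with a soft-committee student whose biases are adjustable; tanh stands in for the usual error-function activation, and the learning rates and training length are illustrative rather than taken from the analysis. The adaptive, temperature-parameterised algorithm itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

N, K = 50, 3                       # input dimension, hidden units
g = np.tanh                        # stands in for the usual erf activation

# Teacher soft-committee machine with non-degenerate biases.
Wt = rng.standard_normal((K, N)) / np.sqrt(N)
bt = np.array([-0.5, 0.0, 0.5])

# Student with adjustable weights *and* biases.
Ws = 0.1 * rng.standard_normal((K, N))
bs = np.zeros(K)

eta = 0.5 / N                      # illustrative learning rate
for _ in range(200_000):           # on-line: one fresh example per step
    x = rng.standard_normal(N)
    y = g(Wt @ x + bt).sum()       # teacher label
    h = Ws @ x + bs
    err = g(h).sum() - y
    delta = err * (1 - g(h) ** 2)  # backpropagated delta per hidden unit
    Ws -= eta * np.outer(delta, x)
    bs -= eta * delta

print("student biases:", np.sort(bs).round(2), "teacher:", np.sort(bt))
```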
49. Speech recognition in noise using weighted matching algorithms

Becerra Yoma, Nestor January 1998
This thesis investigates the problem of automatic speech recognition in noise (additive and convolutional) through the development of Weighted Matching Algorithms (WMA). The WMA approach relies on the fact that additive noise corrupts some segments of the speech signal more severely than others. As a result, WMA revises the classical concept of acoustic pattern matching to incorporate the segmental signal-to-noise ratio (SNR) frame by frame. The problem of end-point detection is also addressed, and a method based on autoregressive analysis of noise is proposed for robust speech pulse detection; the technique is shown to be effective in increasing the discriminability between the speech signal and background noise. Modified versions of the Dynamic Time Warping (DTW) and Hidden Markov Model (HMM) algorithms are proposed and tested in combination with reliability-in-noise-cancelling weighting, firstly using a novel noise cancelling neural net (LIN, Lateral Inhibition Neural Net) and then spectral subtraction (SS). The reliability in noise cancelling is a function of the local SNR and attempts to measure the reliability of the information provided by the noise cancelling technique. A model for additive noise is proposed with the suggestion that the hidden clean signal information should be treated as a stochastic variable. This model is applied to estimate the uncertainty in noise cancelling using SS in a Mel filter bank, and this uncertainty (the inverse of reliability) is employed to compute the weighting coefficients used in the modified DTW or Viterbi (HMM) algorithms. This uncertainty (in the form of a variance) is mainly caused by the lack of knowledge about the phase difference between the noise and clean signals. The model for additive noise also suggests that SS can be defined as the expected value of the hidden clean signal energy in the log domain, given the noisy energy and the noise energy estimate. The reliability-in-noise-cancelling weighting is tested on an isolated word recognition task (digits) with several types of noise, and is shown to substantially reduce the error rate when SS is used to remove the additive noise with only a poor estimate of the corrupting signal. The weighted Viterbi (HMM) algorithm is compared and combined with state duration modelling. It is shown that weighting the time-varying signal information requires only a low computational load and leads to better results than the introduction of temporal constraints in the recognition algorithm. In combination with temporal constraints, the weighted Viterbi algorithm achieves high recognition accuracy at moderate SNRs without an accurate noise model.
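The core idea of weighted matching can be sketched directly: each frame's contribution to the DTW alignment cost is scaled by a reliability weight derived from its segmental SNR, so badly corrupted frames count for less. The particular weight function, features, and data below are assumptions for illustration, not the thesis's formulation.

```python
import numpy as np

def weighted_dtw(ref, test, snr_db):
    """DTW between two feature sequences in which each test frame's local
    distance is scaled by a reliability weight derived from its segmental
    SNR, so heavily corrupted frames count for less in the match."""
    # Assumed weight form: near 1 at high SNR, near 0 at very low SNR.
    w = 1.0 / (1.0 + 10.0 ** (-np.asarray(snr_db) / 10.0))
    n, m = len(ref), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = w[j - 1] * np.sum((ref[i - 1] - test[j - 1]) ** 2)
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(4)
ref = rng.standard_normal((20, 12))                # e.g. 12 Mel-band features
test = ref + 0.8 * rng.standard_normal((20, 12))   # noisy rendition
snr = rng.uniform(-5, 20, size=20)                 # frame-by-frame segmental SNR
print(weighted_dtw(ref, test, snr))
```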
50. Model building in neural networks with hidden Markov models

Wynne-Jones, Michael January 1994
This thesis concerns the automatic generation of architectures for neural networks and other pattern recognition models comprising many elements of the same type. The requirement for such models, with automatically determined topology and connectivity, arises from two needs. The first is the need to develop commercial applications of the technology without resorting to laborious trial and error with different network sizes; the second is the need, in large and complex pattern processing applications such as speech recognition, to optimise the allocation of computing resources for problem solving. The state of the art in adaptive architectures is reviewed, and a mechanism is proposed for adding new processing elements to models. The scheme is developed in the context of multi-layer perceptron networks, and is linked to the best available network-pruning mechanism through a numerical criterion, with construction indicated at one extreme and pruning at the other. The construction mechanism does not work in the multi-layer perceptron for which it was developed, owing to the long-range effects occurring in many applications of these networks. It works demonstrably well in density estimation models based on Gaussian mixtures, which are of the same family as the increasingly popular radial basis function networks. The construction mechanism is applied to the initialisation of the density estimators embedded in the states of a hidden Markov model for speaker-independent speech recognition, where it offers a considerable increase in recogniser performance, provided cross-validation is used to prevent over-training.
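A sketch of the constructive idea in the Gaussian-mixture setting: run EM, then add capacity by splitting an existing component into two offset children and re-fitting. The splitting criterion used here (largest variance) is an assumed stand-in for the numerical criterion developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

def em_step(x, mu, var, pi):
    # E-step: responsibilities of each 1-D Gaussian for each point.
    p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate means, variances and mixing weights.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, nk / len(x)

def grow(x, mu, var, pi):
    """Constructive step: split the component with the largest variance
    (an assumed criterion) into two offset children."""
    k = np.argmax(var)
    off = np.sqrt(var[k])
    mu = np.append(np.delete(mu, k), [mu[k] - off, mu[k] + off])
    var = np.append(np.delete(var, k), [var[k], var[k]])
    pi = np.append(np.delete(pi, k), [pi[k] / 2, pi[k] / 2])
    return mu, var, pi / pi.sum()

# Three-cluster data, fitted starting from a single component.
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 0.3, 300),
                    rng.normal(3, 0.4, 300)])
mu, var, pi = np.array([0.0]), np.array([4.0]), np.array([1.0])
for _ in range(2):                       # grow from 1 to 3 components
    for _ in range(50):
        mu, var, pi = em_step(x, mu, var, pi)
    mu, var, pi = grow(x, mu, var, pi)
for _ in range(50):
    mu, var, pi = em_step(x, mu, var, pi)
print(np.sort(mu).round(2))              # approx. [-2, 1, 3]
```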
