1

Network Training for Continuous Speech Recognition

Alphonso, Issac John 13 December 2003 (has links)
Spoken language processing is one of the oldest and most natural modes of information exchange between human beings. For centuries, people have tried to develop machines that can understand and produce speech the way humans do so naturally. The biggest obstacle to modeling speech with computer programs and mathematics is that language is instinctive, whereas the vocabulary and dialect used in communication are learned. Human beings are genetically equipped with the ability to learn languages, and culture imprints the vocabulary and dialect on each member of society. This thesis examines the role of pattern classification in the recognition of human speech, i.e., machine learning techniques that are currently being applied to the spoken language processing problem. The primary objective of this thesis is to create a network training paradigm that allows for direct training of multi-path models and alleviates the need for complicated systems and training recipes. A traditional trainer uses an expectation-maximization (EM)-based supervised training framework to estimate the parameters of a spoken language processing system. EM-based parameter estimation for speech recognition is performed using several complicated stages of iterative reestimation. These stages are typically prone to human error. The network training paradigm reduces the complexity of the training process while retaining the robustness of the EM-based supervised training framework. The hypothesis of this thesis is that the network training paradigm can achieve recognition performance comparable to a traditional trainer while alleviating the need for complicated systems and training recipes for spoken language processing systems.
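
The abstract contrasts the proposed network trainer with conventional EM-based iterative reestimation. As a point of reference only, the sketch below shows the general shape of such an EM loop on a toy problem (a two-component 1-D Gaussian mixture); the thesis's actual acoustic-model reestimation is far more involved, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Toy EM reestimation for a 2-component 1-D Gaussian mixture.

    Illustrates the expectation/maximization alternation that EM-based
    speech trainers apply, at much larger scale, to acoustic models.
    """
    # Initial parameter guesses.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: reestimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)

    return pi, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_gmm_1d(data))
```
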
2

An Analysis of Particle Swarm Optimizers

Van den Bergh, Frans 03 May 2006 (has links)
Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people's faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions. / Thesis (PhD)--University of Pretoria, 2007. / Computer Science / Unrestricted
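
For readers unfamiliar with the algorithm, the following is a minimal sketch of the standard global-best PSO with inertia weight, not the enhanced guaranteed-convergence or cooperative variants developed in the thesis; the function names, bounds, and coefficient values are illustrative placeholders.

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, bounds=(-5.0, 5.0),
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal global-best PSO with inertia weight (gbest topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best position

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()

    return g, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))   # simple benchmark function
    print(pso(sphere, dim=10))
```
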
3

A Neural Network Classifier for Spectral Pattern Recognition. On-Line versus Off-Line Backpropagation Training

Staufer-Steinnocher, Petra, Fischer, Manfred M. 12 1900 (has links) (PDF)
In this contribution we evaluate on-line and off-line techniques to train a single hidden layer neural network classifier with logistic hidden and softmax output transfer functions on a multispectral pixel-by-pixel classification problem. In contrast to current practice, a multiple class cross-entropy error function has been chosen as the function to be minimized. The non-linear differential equations cannot be solved in closed form. To solve for a set of locally minimizing parameters we use the gradient descent technique for parameter updating, based upon the backpropagation technique for evaluating the partial derivatives of the error function with respect to the parameter weights. Empirical evidence shows that on-line and epoch-based gradient descent backpropagation fail to converge within 100,000 iterations, due to the fixed step size. Batch gradient descent backpropagation training is superior in terms of learning speed and convergence behaviour. Stochastic epoch-based training tends to be slightly more effective than on-line and batch training in terms of generalization performance, especially when the number of training examples is large. Moreover, it is less prone to fall into local minima than on-line and batch modes of operation. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
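
A compact sketch of the classifier architecture described here (logistic hidden layer, softmax output, cross-entropy error) and of the on-line versus batch update schedules being compared; layer sizes, learning rate, and data are placeholders, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W1, b1, W2, b2):
    H = sigmoid(X @ W1 + b1)            # logistic hidden units
    return H, softmax(H @ W2 + b2)      # softmax output units

def grad_step(X, Y, W1, b1, W2, b2, lr):
    """One gradient-descent step on the multiple-class cross-entropy error."""
    H, P = forward(X, W1, b1, W2, b2)
    dZ2 = (P - Y) / len(X)                   # gradient at the softmax pre-activations
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)         # backpropagate through the logistic layer
    W2 -= lr * H.T @ dZ2;  b2 -= lr * dZ2.sum(axis=0)
    W1 -= lr * X.T @ dZ1;  b1 -= lr * dZ1.sum(axis=0)

def train(X, Y, n_hidden=8, lr=0.1, epochs=100, mode="batch", seed=0):
    """mode='batch': one update per pass over the data; mode='online': one update per pattern."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        if mode == "batch":
            grad_step(X, Y, W1, b1, W2, b2, lr)
        else:
            for i in rng.permutation(len(X)):
                grad_step(X[i:i+1], Y[i:i+1], W1, b1, W2, b2, lr)
    return W1, b1, W2, b2
```
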
4

Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem

Fischer, Manfred M., Staufer-Steinnocher, Petra 03 1900 (has links) (PDF)
Various techniques for optimizing the multiple class cross-entropy error function to train single hidden layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of error backpropagation using gradient descent, PR-conjugate gradient and BFGS quasi-Newton methods. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
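
A hedged sketch of how the same cross-entropy objective can be handed to the different local optimizers named above. SciPy's off-the-shelf "CG" (Polak-Ribiere conjugate gradient) and "BFGS" routines stand in for the paper's own implementations, and the toy softmax objective and synthetic data are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize, approx_fprime

def make_loss(X, y, n_classes):
    """Multiple-class cross-entropy of a linear softmax model, as a function
    of the flattened weight vector (a stand-in for the paper's network error)."""
    def loss(w):
        W = w.reshape(X.shape[1], n_classes)
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)
        logp = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].mean()
    return loss

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # two synthetic classes
loss = make_loss(X, y, n_classes=2)
w0 = np.zeros(5 * 2)

# Conjugate-gradient and BFGS quasi-Newton local optimizers.
for method in ("CG", "BFGS"):
    res = minimize(loss, w0, method=method)
    print(method, res.fun, res.nit)

# Plain batch gradient descent with a numerically estimated gradient.
w, lr = w0.copy(), 0.5
for _ in range(200):
    w -= lr * approx_fprime(w, loss, 1e-6)
print("GD", loss(w))
```
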
5

Fast Computation of Wide Neural Networks

Vineeth Chigarangappa Rangadhamap (5930585) 02 January 2019 (has links)
The recent advances in artificial neural networks have demonstrated competitive performance of deep neural networks, comparable with humans, on tasks like image classification, natural language processing and time series classification. These large scale networks pose an enormous computational challenge, especially in resource constrained devices. The current work proposes a targeted-rank based framework for accelerated computation of wide neural networks. It investigates the problem of rank selection for tensor ring nets to achieve optimal network compression. When applied to a state-of-the-art wide residual network, namely WideResnet, the framework yielded a significant reduction in computational time. The optimally compressed non-parallel WideResnet is faster to compute on a CPU by almost 2x, with only 5% degradation in accuracy, when compared to a non-parallel implementation of uncompressed WideResnet.
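
The rank/compute trade-off behind this kind of compression can be illustrated with a much simpler stand-in: factorizing a wide layer's weight matrix with a truncated SVD at a chosen target rank. This is not the tensor ring decomposition used in the thesis, only a sketch of how a rank parameter trades parameters and multiply-adds against approximation error; the layer sizes and rank below are arbitrary.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate a wide layer's weight matrix W (d_in x d_out) by two
    thin factors A (d_in x rank) and B (rank x d_out) via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

d_in, d_out, rank = 2048, 2048, 64
rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))
A, B = low_rank_factorize(W, rank)

x = rng.normal(size=(1, d_in))
y_full = x @ W                 # costs d_in * d_out multiply-adds
y_fast = (x @ A) @ B           # costs rank * (d_in + d_out) multiply-adds

params_full = d_in * d_out
params_fast = rank * (d_in + d_out)
print("compression ratio:", params_full / params_fast)
print("relative error:", np.linalg.norm(y_full - y_fast) / np.linalg.norm(y_full))
```

A random matrix like the one above compresses poorly; the point of rank selection is that trained network weights often admit a low effective rank, so a well-chosen rank keeps the error small while cutting the compute.
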
6

Unstructured Road Recognition And Following For Mobile Robots Via Image Processing Using ANNs

Dilan, Askin Rasim 01 June 2010 (has links) (PDF)
For an autonomous outdoor mobile robot, the ability to detect the roads around it is a vital capability. Unstructured roads are among the toughest challenges for a mobile robot, both in terms of detection and navigation. Even though mobile robots use various sensors to interact with their environment, the potential of cameras, being a comparatively low-cost and rich source of information, should be fully utilized. This research aims to systematically investigate the potential use of streaming camera images in detecting unstructured roads. The investigation focuses on methods employing Artificial Neural Networks (ANNs). An exhaustive test process is followed in which different kernel sizes and feature vectors are varied systematically and training is carried out via backpropagation in a feed-forward ANN. The thesis also claims a contribution in the creation of test data, where ground-truth images are created almost in real time by making use of the dexterity of human hands. Various road profiles ranging from human-made unstructured roads to trails are investigated. The output of the ANNs, indicating road regions, is validated against the vanishing point computed in the scene, and a heading vector is computed to keep the robot on the road. As a result, it is shown that, even though a robot cannot fully rely on camera images for heading computation as proposed, image-based heading computation can provide useful assistance to the other sensors present on a mobile robot.
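
The abstract does not spell out how the heading vector is derived from the classified road region, so the centroid-based rule below is purely an illustrative assumption: steer toward the horizontal centroid of the pixels the network labelled as road, using a simple pinhole-camera mapping from pixel offset to angle.

```python
import numpy as np

def heading_from_road_mask(mask, fov_deg=60.0):
    """Illustrative heading computation from a binary road mask (1 = road).

    Steers toward the column-wise centroid of road pixels in the lower half
    of the image; the offset-to-angle mapping assumes a pinhole camera with
    the given horizontal field of view.
    """
    h, w = mask.shape
    lower = mask[h // 2:, :]                       # road region nearest the robot
    _, xs = np.nonzero(lower)
    if len(xs) == 0:
        return 0.0                                 # no road detected: keep heading
    offset = (xs.mean() - w / 2.0) / (w / 2.0)     # -1 (far left) .. +1 (far right)
    return offset * (fov_deg / 2.0)                # steering angle in degrees

# Toy mask: road region drifting to the right of the image centre.
mask = np.zeros((240, 320), dtype=np.uint8)
mask[120:, 180:260] = 1
print(heading_from_road_mask(mask))                # positive angle -> steer right
```
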
7

Realizace rozdělujících nadploch / The decision boundary

Gróf, Zoltán January 2012 (has links)
The main aim of this master's thesis is to describe the implementation of decision boundaries with the help of artificial neural networks. The objective is to present theoretical knowledge concerning this field and to demonstrate these statements on practical examples. The work contains a basic theoretical description of the field of pattern recognition and the field of feature-based representation of objects. A classifier working on the basis of Bayes decision theory is presented in this part, and other types of classifiers are mentioned as well. The work then deals with artificial neural networks in more detail; it contains a theoretical description of their function and of their abilities in the creation of decision boundaries in the feature plane. Examples from the literature are shown for the use of neural networks in corresponding problems. As part of this work, the program ANN-DeBC was created in Matlab to generate practical results on the usage of feed-forward neural networks for the implementation of decision boundaries. The work contains a detailed description of this program, and the achieved results are presented and analyzed. It is also shown how artificial neural networks create decision boundaries in the form of geometric shapes. The effects of the chosen topology of the neural network and of the number of training samples on the success of the classification are observed, and the minimal values of these parameters required for the successful creation of decision boundaries are determined for the individual examples. Furthermore, it is shown how the neural networks behave when classifying realistically distributed training samples, and which methods can affect the shape of the created decision boundaries.
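
The thesis's experiments were carried out with its own Matlab program (ANN-DeBC); as a rough, hedged analogue, the Python sketch below trains a small feed-forward network on a 2-D two-class problem whose ideal decision boundary is a geometric shape (a circle), then samples the learned boundary on a grid. The data, topology, and use of scikit-learn's MLPClassifier are illustrative assumptions, not the thesis's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Two-class 2-D problem: points inside a circle vs. outside it, so the ideal
# decision boundary is a geometric shape (a circle) in the feature plane.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.4).astype(int)

# Small feed-forward network; topology and sample count are the knobs whose
# minimal workable values the thesis investigates.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)

# Sample the learned decision boundary on a grid of feature-plane points.
gx, gy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pred = clf.predict(grid).reshape(gx.shape)
print("training accuracy:", clf.score(X, y))
print("fraction of plane labelled as class 1:", pred.mean())
```
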
