1

Checking the integrity of Global Positioning Recommended Minimum (GPRMC) sentences using Artificial Neural Network (ANN)

Hussain, Tayyab January 2009
In this study, an Artificial Neural Network (ANN) is used to check the integrity of Global Positioning Recommended Minimum (GPRMC) sentences. GPRMC sentences are the most common sentences transmitted by Global Positioning System (GPS) devices; a single sentence contains nearly everything a GPS application needs. Data integrity is compared on the basis of the classification accuracy and the minimum error obtained with the ANN. The ANN requires data to be presented in a format supported by the network's learning process, so a certain amount of preprocessing is needed before training patterns are presented to the network. This preprocessing is done by designing and developing several algorithms in C# using Visual Studio .NET 2003. The study uses the back-propagation (BP) feed-forward multilayer ANN algorithm, with the learning rate and the momentum as its parameters. The results are analyzed in terms of different ANN architectures, classification accuracy, Sum of Square Error (SSE), variable sensitivity analysis and the training graph. The best ANN architecture obtained shows good performance, with a selection classification accuracy of 96.79% and a selection sum of square error of 0.2022. The study uses the ANN tool Trajan 6.0 Demonstrator.
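As context for the integrity check described above: GPRMC sentences carry their own NMEA checksum, computed as the XOR of every character between the leading '$' and the '*'. The sketch below is in Python rather than the C# used in the thesis, and the example sentence and field names are illustrative choices of mine, not taken from the study's data; it shows how a sentence can be validated and split into fields before any further preprocessing.

```python
def gprmc_checksum_ok(sentence: str) -> bool:
    """Verify the NMEA checksum of a GPRMC sentence.

    The checksum is the XOR of every character between '$' and '*',
    written as two hexadecimal digits after the '*'.
    """
    if not sentence.startswith("$GPRMC") or "*" not in sentence:
        return False
    body, _, checksum = sentence[1:].partition("*")
    calculated = 0
    for ch in body:
        calculated ^= ord(ch)
    return f"{calculated:02X}" == checksum.strip().upper()


def gprmc_fields(sentence: str) -> dict:
    """Split a GPRMC sentence into named fields for later preprocessing."""
    body = sentence[1:].split("*")[0]
    names = ["type", "utc_time", "status", "latitude", "ns", "longitude", "ew",
             "speed_knots", "track_deg", "date", "mag_variation", "mag_ew"]
    return dict(zip(names, body.split(",")))


# A commonly used illustrative sentence, not data from the study
example = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(gprmc_checksum_ok(example), gprmc_fields(example)["latitude"])
```

A sentence that fails this checksum test can be flagged as corrupted before it ever reaches the network's training or classification stage.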
2

A feed forward neural network approach for matrix computations

Al-Mudhaf, Ali F. January 2001
A new neural network approach for performing matrix computations is presented. The idea is to construct a feed-forward neural network (FNN) and train it to match a desired set of patterns; the solution of the problem is the converged weight matrix of the FNN. Accordingly, unlike conventional FNN research, which concentrates on the external properties (mappings) of a network, this study concentrates on its internal properties (weights). The network is linear and its weights are usually strongly constrained, so a complicated, overlapping network has to be constructed. The approach therefore depends heavily on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, suffer from many deficiencies when applied to matrix-algebra problems, e.g. slow convergence due to an improper choice of learning rate (LR). This study therefore focuses on developing new, efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of choosing the LR is a line search combined with the steepest-descent method, namely bracketing with the golden-section method, which provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process. The computational feasibility of these methods is assessed on two matrix problems: the LU decomposition of band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. Two performance indexes are considered: learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: steepest descent with line search (SDLS), the conventional back-propagation (BP) algorithm, and CG methods, specifically the Fletcher-Reeves (CGFR) and Polak-Ribière (CGPR) conjugate gradient methods. Performance comparisons between these minimization methods demonstrate that the CG training methods give better convergence accuracy and are by far superior with respect to learning time, offering speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criterion is used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts show the best performance among all the methods tested in training the FNN for LU decomposition and matrix inversion. It is therefore concluded that CG methods, in particular the Polak-Ribière conjugate gradient method with Powell's restart criterion, are good candidates for training FNNs for matrix computations.
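To make the "converged weights are the solution" idea concrete, the following sketch is my own Python/NumPy illustration, not the thesis implementation: it trains a single linear weight matrix W so that A @ W approximates the identity, using Polak-Ribière conjugate-gradient directions with Powell's restart test. Because this loss is quadratic, a closed-form step length stands in for the golden-section line search described above.

```python
import numpy as np


def fnn_inverse(A: np.ndarray, tol: float = 1e-10, max_iter: int = 500) -> np.ndarray:
    """Train a single linear layer W so that A @ W reproduces the identity.

    The converged weights then approximate A^{-1}.  Search directions follow
    the Polak-Ribiere conjugate gradient rule with Powell's restart test; the
    step length is the exact minimizer of the quadratic loss along each
    direction (standing in for a golden-section line search).
    """
    n = A.shape[0]
    I = np.eye(n)
    W = np.zeros((n, n))                 # initial network weights
    G = A.T @ (A @ W - I)                # gradient of E(W) = 0.5 * ||A W - I||_F^2
    D = -G                               # first direction: steepest descent
    for _ in range(max_iter):
        AD = A @ D
        denom = np.sum(AD * AD)
        if denom == 0.0:
            break
        alpha = -np.sum(G * D) / denom   # exact line minimum for the quadratic loss
        W = W + alpha * D
        G_new = A.T @ (A @ W - I)
        if np.linalg.norm(G_new) < tol:
            break
        beta = np.sum(G_new * (G_new - G)) / np.sum(G * G)   # Polak-Ribiere coefficient
        # Powell's restart: drop back to steepest descent when successive
        # gradients lose orthogonality (or when beta turns negative)
        if beta < 0.0 or abs(np.sum(G_new * G)) >= 0.2 * np.sum(G_new * G_new):
            D = -G_new
        else:
            D = -G_new + beta * D
        G = G_new
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # a well-behaved test matrix
    W = fnn_inverse(A)
    print(np.allclose(A @ W, np.eye(5), atol=1e-6))
```

The converged W approximates the inverse of A; an LU-decomposition variant would, in the same spirit, constrain the trained weight matrices to triangular form.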
