1 |
An Enhanced Learning for Restricted Hopfield Networks. Halabian, Faezeh. 10 June 2021.
This research investigates the development of a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield Networks. Hopfield Networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for different applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for the RHN which not only improves the convergence of the training sub-routine but is also shown to enhance the learning capability of the network. In particular, after describing the architecture and components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation over time, results in a training algorithm with enhanced convergence for the RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns with that of a conventional RBM, a conventional Hopfield network, and other training methods.
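The modified SPSA variant and its coupling with back-propagation over time are not spelled out in the abstract; as a hedged point of reference, the sketch below implements plain SPSA, where a gradient estimate is formed from two perturbed loss evaluations. The loss function, gain sequence and perturbation size are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """Estimate the gradient of `loss` at `theta` from two evaluations,
    using a simultaneous +/-1 Bernoulli perturbation of all weights."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c * delta)

def spsa_train(loss, theta0, a=0.5, c=0.01, steps=300, seed=0):
    """Plain SPSA descent with a standard decaying gain; the thesis's
    modification and its use inside RHN training are not reproduced."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for k in range(steps):
        theta -= a / (k + 1) ** 0.602 * spsa_gradient(loss, theta, c, rng)
    return theta

# Toy quadratic loss standing in for the RHN training objective.
w_true = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - w_true) ** 2))
print(spsa_train(loss, np.zeros(3)))   # should approach w_true
```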
|
2 |
Optimisation of the predictive ability of artificial neural network (ANN) models: A comparison of three ANN programs and four classes of training algorithm. Rowe, Raymond C., Plumb, A.P., York, Peter, Brown, M. January 2005.
The purpose of this study was to determine whether artificial neural network (ANN) programs implementing different backpropagation algorithms and default settings are capable of generating equivalent, highly predictive models. Three ANN packages were used: INForm, CAD/Chem and MATLAB. Twenty variants of gradient descent, conjugate gradient, quasi-Newton and Bayesian regularisation algorithms were used to train networks containing a single hidden layer of 3-12 nodes.
All INForm and CAD/Chem models trained satisfactorily for tensile strength, disintegration time and percentage dissolution at 15, 30, 45 and 60 min. Similarly, acceptable training was obtained for MATLAB models using Bayesian regularisation. Training of MATLAB models with other algorithms was erratic. This effect was attributed to a tendency for the MATLAB implementation of the algorithms to attenuate training in local minima of the error surface. Predictive models for tablet capping and friability could not be generated.
The most predictive models from each ANN package varied with respect to the optimum network architecture and training algorithm. No significant differences were found in the predictive ability of these models. It is concluded that comparable models are obtainable from different ANN programs provided that both the network architecture and training algorithm are optimised. A broad strategy for optimisation of the predictive ability of an ANN model is proposed.
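None of the three commercial packages can be reproduced here, but the broad optimisation strategy the abstract argues for (tuning architecture and training algorithm together, then selecting by predictive ability) can be illustrated with a hedged sketch. The scikit-learn solvers below are assumed stand-ins, not the gradient descent, conjugate gradient, quasi-Newton or Bayesian regularisation variants of the study, and the formulation data are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                                          # hypothetical formulation variables
y = X @ np.array([1.5, -2.0, 0.7, 0.3]) + 0.05 * rng.normal(size=200)   # stand-in tablet response

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best = None
for solver in ("lbfgs", "sgd", "adam"):                 # available training algorithms
    for nodes in range(3, 13):                          # single hidden layer of 3-12 nodes
        model = MLPRegressor(hidden_layer_sizes=(nodes,), solver=solver,
                             max_iter=5000, random_state=0).fit(X_tr, y_tr)
        score = model.score(X_val, y_val)               # validation R^2 as the selection criterion
        if best is None or score > best[0]:
            best = (score, solver, nodes)

print("best validation R^2 %.3f with solver=%s and %d hidden nodes" % best)
```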
|
3 |
Jednoduché rozpoznávání písma / Simple Character Recognition. Duba, Nikolas. January 2011.
This thesis focuses on optical character recognition and its processing. The goal of the application is to make it easy to track daily expenses; it can be used by an individual or by a company as a monitoring tool. The main principle is to make the tool as user-friendly as possible. The application takes its input from hardware such as a scanner or a camera and analyzes the content of a cash voucher for further processing. To analyze the voucher, the application employs different optical character recognition methods, and the result is subsequently parsed. Detailed explanations of the methods used are given in the document. The output of the application is a database populated with cash voucher details. Another part of the work is an information system whose main purpose is to display the collected data.
|
4 |
Cognitive radio performance optimisation through spectrum availability prediction. Barnes, Simon Daniel. 27 June 2012.
The Federal Communications Commission (FCC) has predicted that, under the current regulatory environment, a spectrum shortage may be faced in the near future. This impending spectrum shortage is in part due to a rapidly increasing demand for wireless services and in part due to inefficient usage of currently licensed bands. A new paradigm pertaining to wireless spectrum allocation, known as cognitive radio (CR), has been proposed as a potential solution to this problem. This dissertation seeks to contribute to research in the field of CR through an investigation into the effect that a primary user (PU) channel occupancy model will have on the performance of a secondary user (SU) in a CR network. The model assumes that PU channel occupancy can be described as a binary process, and a two-state hidden Markov model (HMM) was thus chosen for this investigation. Traditional algorithms for training the model were compared with certain evolutionary-based training algorithms in terms of their resulting prediction accuracy and computational complexity. The performance of this model is important since it provides SUs with a basis for channel switching and future channel allocations. A CR simulation platform was developed, and the results gained illustrate the effect that the model had on channel switching and the subsequently achievable performance of an SU operating within a CR network. Performance with regard to achievable SU data throughput, PU disruption rate and SU power consumption was examined for both theoretical test data and data obtained from real-world spectrum measurements (taken in Pretoria, South Africa). The results show that a trade-off exists between the achievable SU throughput and the average PU disruption rate. Significant SU performance improvements were observed when prediction modelling was employed, and it was found that the performance and complexity of the model were influenced by the algorithm employed to train it. SU performance was also affected by the length of the quick sensing interval employed. Results obtained from measured occupancy data were comparable with those obtained from theoretical occupancy data, with an average percentage similarity score of 96% for prediction accuracy (using the Viterbi training algorithm), 90% for SU throughput, 83% for SU power consumption and 71% for PU disruption rate. / Dissertation (MEng)--University of Pretoria, 2012. / Electrical, Electronic and Computer Engineering / unrestricted
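As a rough, self-contained illustration of occupancy prediction (the dissertation's two-state HMM with Viterbi or evolutionary-based training is not reproduced), the sketch below treats the binary PU occupancy sequence as a two-state Markov chain, estimates its transition matrix by counting, and predicts the next sensing slot; the synthetic occupancy trace is an assumption standing in for measured spectrum data.

```python
import numpy as np

def fit_transition_matrix(occupancy):
    """Estimate a 2x2 transition matrix from a binary occupancy sequence
    (0 = channel idle, 1 = PU busy) by simple transition counting."""
    counts = np.ones((2, 2))                  # Laplace smoothing to avoid zero rows
    for prev, nxt in zip(occupancy[:-1], occupancy[1:]):
        counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(occupancy, P):
    """Predict the most likely state of the next sensing slot."""
    return int(np.argmax(P[occupancy[-1]]))

# Synthetic bursty occupancy trace standing in for measured spectrum data.
rng = np.random.default_rng(1)
trace, state = [], 0
for _ in range(500):
    state = state if rng.uniform() < 0.9 else 1 - state   # sticky channel behaviour
    trace.append(state)

P = fit_transition_matrix(trace)
print("estimated transition matrix:\n", P)
print("predicted next slot:", predict_next(trace, P))
```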
|
5 |
Neurale netwerke as moontlike woordafkappingstegniek vir Afrikaans. Fick, Machteld. 09 1900.
Text in Afrikaans / Summaries in Afrikaans and English / In Afrikaans, soos in Nederlands en Duits, word saamgestelde woorde aanmekaar geskryf. Nuwe woorde word dus voortdurend geskep deur woorde aanmekaar te haak. Dit bemoeilik die proses van woordafkapping tydens teksprosessering, wat deesdae deur rekenaars gedoen word, aangesien die verwysingsbron gedurig verander. Daar bestaan verskeie afkappingsalgoritmes en tegnieke, maar die resultate is onbevredigend. Afrikaanse woorde met korrekte lettergreepverdeling is uit die elektroniese weergawe van die Handwoordeboek van die Afrikaanse Taal (HAT) onttrek. 'n Neurale netwerk (vorentoevoer-terugpropagering) is met sowat 5 000 van hierdie woorde afgerig. Die neurale netwerk is verfyn deur 'n geskikte afrigtingsalgoritme en oordragfunksie vir die probleem asook die optimale aantal verborge lae en aantal neurone in elke laag te bepaal. Die neurale netwerk is met 5 000 nuwe woorde getoets en dit het 97,56% van moontlike posisies korrek as óf geldige óf ongeldige afkappingsposisies geklassifiseer. Verder is 510 woorde uit tydskrifartikels met die neurale netwerk getoets en 98,75% van moontlike posisies is korrek geklassifiseer.
/ In Afrikaans, like in Dutch and German, compound words are written as one word. New words are therefore created by simply joining words. Word hyphenation during typesetting by computer is a problem, because the source of reference changes all the time. Several algorithms and techniques for hyphenation exist, but the results are not satisfactory. Afrikaans words with correct syllabification were extracted from the electronic version of the Handwoordeboek van die Afrikaanse Taal (HAT). A neural network (feedforward backpropagation) was trained with about 5 000 of these words. The neural network was refined by heuristically finding a suitable training algorithm and transfer function for the problem, as well as by determining the optimal number of hidden layers and the number of neurons in each layer. The neural network was tested with 5 000 words not in the training data. It classified 97,56% of possible points in these words correctly as either valid or invalid hyphenation points. Furthermore, 510 words from articles in a magazine were tested with the neural network and 98,75% of possible positions were classified correctly. / Computing / M.Sc. (Operasionele Navorsing)
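A minimal sketch of the classification set-up described above, under several assumptions: candidate hyphenation points are encoded as a fixed window of letters around the split position, one-hot encoded and fed to a small feedforward backpropagation network. The window width, alphabet handling and the tiny labelled examples are all hypothetical; real training would use the roughly 5 000 HAT-derived words with known syllabification.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

ALPHABET = "abcdefghijklmnopqrstuvwxyz'"
PAD = "_"

def encode_position(word, i, window=2):
    """One-hot encode `window` letters on each side of the candidate split word[:i] | word[i:]."""
    padded = PAD * window + word.lower() + PAD * window
    context = padded[i:i + 2 * window]
    vec = np.zeros(len(context) * (len(ALPHABET) + 1))
    for j, ch in enumerate(context):
        idx = ALPHABET.find(ch)
        vec[j * (len(ALPHABET) + 1) + (idx if idx >= 0 else len(ALPHABET))] = 1.0
    return vec

# Hypothetical labelled examples: (word, position, 1 = valid hyphenation point).
examples = [("netwerk", 3, 1), ("netwerk", 2, 0), ("woorde", 4, 1), ("woorde", 3, 0)]
X = np.array([encode_position(w, i) for w, i, _ in examples])
y = np.array([label for *_, label in examples])

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([encode_position("netwerk", 3)]))   # expect a valid-split prediction
```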
|
6 |
Default reasoning and neural networks. Govender, I. (Irene). 06 1900.
In this dissertation a formalisation of nonmonotonic reasoning, namely Default logic, is discussed. A proof theory for Default logic and a variant of Default logic, Prioritised Default logic, is presented. We also pursue an investigation into the relationship between default reasoning and making inferences in a neural network. The inference problem shifts from the logical problem in Default logic to an optimisation problem in neural networks, in which maximum consistency is aimed at. The inference is realised as an adaptation process that identifies and resolves conflicts between existing knowledge about the relevant world and external information. Knowledge and data are transformed into constraint equations, and the nodes in the network represent propositions and constraint equations. The violation of constraints is formulated in terms of an energy function. The Hopfield network is shown to be suitable for modelling optimisation problems and default reasoning. / Computer Science / M.Sc. (Computer Science)
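As a small, hedged illustration of the energy-minimisation view used in the dissertation (the encoding of defaults as constraint equations is not reproduced), the sketch below implements a binary Hopfield network whose asynchronous updates never increase the standard energy function; the weights and thresholds are arbitrary illustrative values.

```python
import numpy as np

def energy(W, theta, s):
    """Standard Hopfield energy E(s) = -1/2 s^T W s + theta^T s for states s in {-1, +1}."""
    return -0.5 * s @ W @ s + theta @ s

def settle(W, theta, s, sweeps=10, rng=np.random.default_rng(0)):
    """Asynchronous updates; with symmetric W and zero diagonal, the energy is non-increasing."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s - theta[i] >= 0 else -1
    return s

# Illustrative 4-unit network standing in for propositions/constraint nodes.
W = np.array([[0, 1, -1, 0], [1, 0, 0, -1], [-1, 0, 0, 1], [0, -1, 1, 0]], dtype=float)
theta = np.zeros(4)
s0 = np.array([1, -1, 1, -1])
s_star = settle(W, theta, s0)
print("initial energy", energy(W, theta, s0), "-> settled energy", energy(W, theta, s_star))
```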
|
7 |
Modelagem de um processo fermentativo por rede Perceptron multicamadas com atraso de tempo / not available. Manesco, Luis Fernando. 09 August 1996.
A utilização de Redes Neurais Artificiais para fins de identificação e controle de sistemas dinâmicos tem recebido atenção especial de muitos pesquisadores, principalmente no que se refere a sistemas não lineares. Neste trabalho é apresentado um estudo sobre a utilização de um tipo em particular de Rede Neural Artificial, uma Perceptron Multicamadas com Atraso de Tempo, na estimação de estados da etapa fermentativa do processo de Reichstein para produção de vitamina C. A aplicação de Redes Neurais Artificiais a este processo pode ser justificada pela existência de problemas associados à esta etapa, como variáveis de estado não mensuráveis e com incertezas de medida e não linearidade do processo fermentativo, além da dificuldade em se obter um modelo convencional que contemple todas as fases do processo. É estudada também a eficácia do algoritmo de Levenberg-Marquardt na aceleração do treinamento da Rede Neural Artificial, além de uma comparação do desempenho de estimação de estados das Redes Neurais Artificiais estudadas com o filtro estendido de Kalman, baseado em um modelo não estruturado do processo fermentativo. A análise do desempenho das Redes Neurais Artificiais estudadas é avaliada em termos de uma figura de mérito baseada no erro médio quadrático, sendo feitas considerações quanto ao tipo da função de ativação e o número de unidades da camada oculta. Os dados utilizados para treinamento e avaliação das Redes Neurais Artificiais foram obtidos de um conjunto de ensaios interpolados para o intervalo de amostragem desejado. / Identification and control of dynamic systems using Artificial Neural Networks has been widely investigated by many researchers in the last few years, with special attention to the application of these networks to nonlinear systems. In this work, a study is presented on the utilization of a particular type of Artificial Neural Network, a Time Delay Multilayer Perceptron, for state estimation of the fermentative phase of the Reichstein process for vitamin C production. The use of Artificial Neural Networks can be justified by the presence of problems such as uncertain and unmeasurable state variables and process non-linearity, and by the fact that a conventional model covering all phases of the fermentative process is very difficult to obtain. The efficiency of the Levenberg-Marquardt algorithm in accelerating the training process is also studied, and a comparison is performed between the studied Artificial Neural Networks and an extended Kalman filter based on a non-structured model of the fermentative process. The analysis of the Artificial Neural Networks is carried out using the mean square error, taking into consideration the activation function and the number of units present in the hidden layer. A set of batch experimental runs, interpolated to the desired sampling interval, is used for training and validating the Artificial Neural Networks.
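A minimal sketch of the time-delay idea behind the network studied here, under assumed names and dimensions: past samples of the measured inputs and outputs are stacked into lagged regressor vectors that an ordinary multilayer perceptron can map to the current state estimate. The number of lags and the toy signals are illustrative; the Levenberg-Marquardt training and the Kalman-filter comparison of the thesis are not reproduced.

```python
import numpy as np

def time_delay_regressors(u, y, lags=3):
    """Stack the last `lags` samples of input u and output y into one
    regressor per time step, turning a static MLP into a time-delay network."""
    rows = []
    for t in range(lags, len(y)):
        rows.append(np.concatenate([u[t - lags:t], y[t - lags:t]]))
    return np.array(rows), y[lags:]

# Toy signals standing in for fermentation measurements (e.g. feed rate and a
# measured concentration); real data would come from the interpolated batch runs.
t = np.linspace(0, 10, 200)
u = np.sin(t)
y = 0.5 * np.roll(u, 2) + 0.1 * np.random.default_rng(0).normal(size=t.size)

X, target = time_delay_regressors(u, y, lags=3)
print(X.shape, target.shape)   # (197, 6): regressors of 3 input lags + 3 output lags
```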
|
8 |
Νέοι αλγόριθμοι εκπαίδευσης τεχνητών νευρωνικών δικτύων και εφαρμογές / New training algorithms for artificial neural networks and applications. Κωστόπουλος, Αριστοτέλης. 17 September 2012.
Η παρούσα διδακτορική διατριβή πραγματεύεται το θέμα της εκπαίδευσης εμπρόσθιων τροφοδοτούμενων τεχνητών νευρωνικών δικτύων και τις εφαρμογές τους. Η παρουσίαση των θεμάτων και των αποτελεσμάτων της διατριβής οργανώνεται ως εξής:
Στο Κεφάλαιο 1 παρουσιάζονται τα τεχνητά νευρωνικά δίκτυα , τα οφέλη της χρήσης τους, η δομή και η λειτουργία τους. Πιο συγκεκριμένα, παρουσιάζεται πως από τους βιολογικούς νευρώνες μοντελοποιούνται οι τεχνητοί νευρώνες, που αποτελούν το θεμελιώδες στοιχείο των τεχνητών νευρωνικών δικτύων. Στη συνέχεια αναφέρονται οι βασικές αρχιτεκτονικές των εμπρόσθιων τροφοδοτούμενων τεχνητών νευρωνικών δικτύων. Το κεφάλαιο ολοκληρώνεται με μια ιστορική αναδρομή για τα τεχνητά νευρωνικά δίκτυα και με την παρουσίαση κάποιων εφαρμογών τους.
Στο Κεφάλαιο 2 παρουσιάζονται μερικοί από τους υπάρχοντες αλγορίθμους εκπαίδευσης τεχνητών νευρωνικών δικτύων. Γίνεται μια περιληπτική αναφορά του προβλήματος της εκπαίδευσης των τεχνητών νευρωνικών δικτύων με επίβλεψη και δίνεται η μαθηματική μοντελοποίηση που αντιστοιχεί στην ελαχιστοποίηση του κόστους. Στην συνέχεια γίνεται μια περιληπτική αναφορά στις μεθόδους που βασίζονται στην κατεύθυνση της πιο απότομης καθόδου, στις μεθόδους δευτέρας τάξεως όπου απαιτείται ο υπολογισμός του Εσσιανού πίνακα της συνάρτησης κόστους, στις μεθόδους μεταβλητής μετρικής, και στις μεθόδους συζυγών κλίσεων. Κατόπιν, παρουσιάζεται ο χώρος των βαρών, η επιφάνεια σφάλματος και οι διάφορες τεχνικές αρχικοποίησης των βαρών των τεχνητών νευρωνικών δικτύων και περιγράφονται οι επιπτώσεις που έχουν στην εκπαίδευση τους.
Στο Κεφάλαιο 3 παρουσιάζεται ένας νέος αλγόριθμος εκπαίδευσης τεχνητών νευρωνικών δικτύων βασισμένος στον αλγόριθμο της οπισθοδιάδοσης του σφάλματος και στην αυτόματη προσαρμογή του ρυθμού εκπαίδευσης χρησιμοποιώντας πληροφορία δυο σημείων. Η κατεύθυνση αναζήτησης του νέου αλγορίθμου είναι η κατεύθυνση της πιο απότομης καθόδου, αλλά για τον προσδιορισμό του ρυθμού εκπαίδευσης χρησιμοποιούνται προσεγγίσεις δυο σημείων της εξίσωσης χορδής των μεθόδων ψεύδο-Newton. Επιπλέον, παράγεται ένας νέος ρυθμός εκπαίδευσης προσεγγίζοντας την νέα εξίσωση χορδής, που προτάθηκε από τον Zhang, η οποία χρησιμοποιεί πληροφορία παραγώγων και συναρτησιακών τιμών. Στη συνέχεια, ένας κατάλληλος μηχανισμός επιλογής του ρυθμού εκπαίδευσης ενσωματώνεται στον αλγόριθμο εκπαίδευσης ώστε να επιλέγεται κάθε φορά ο κατάλληλος ρυθμός εκπαίδευσης. Τέλος, γίνεται μελέτη της σύγκλισης του αλγορίθμου εκπαίδευσης και παρουσιάζονται τα πειραματικά αποτελέσματα για διάφορα προβλήματα εκπαίδευσης.
Στο Κεφάλαιο 4 παρουσιάζονται μερικοί αποτελεσματικοί αλγόριθμοι εκπαίδευσης οι οποίοι βασίζονται στις μεθόδους βελτιστοποίησης συζυγών κλίσεων. Στους υπάρχοντες αλγόριθμους εκπαίδευσης συζυγών κλίσεων προστίθεται ένας αλγόριθμος εκπαίδευσης που βασίζεται στη μέθοδο συζυγών κλίσεων του Perry. Επιπρόσθετα, προτείνονται νέοι αλγόριθμοι συζυγών κλίσεων που προκύπτουν από τις ίδιες αρχές που προέρχονται οι γνωστοί αλγόριθμοι συζυγών κλίσεων των Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribiere και Perry, και ονομάζονται κλιμακωτοί αλγόριθμοι συζυγών κλίσεων. Αυτή η κατηγορία αλγορίθμων βασίζεται στην φασματική παράμετρο κλιμάκωσης του προτάθηκε από τους Barzilai και Borwein. Επιπλέον, ενσωματώνεται στους αλγόριθμους εκπαίδευσης συζυγών κλίσεων μια αποδοτική τεχνική γραμμικής αναζήτησης, που βασίζεται στις συνθήκες του Wolfe και στην διασφαλισμένη κυβική παρεμβολή. Ακόμη, η παράμετρος του αρχικού ρυθμού εκπαίδευσης προσαρμόζεται αυτόματα σε κάθε επανάληψη σύμφωνα με ένα κλειστό τύπο. Στη συνέχεια, εφαρμόζεται μια αποτελεσματική διαδικασία επανεκκίνησης, έτσι ώστε να βελτιωθούν περαιτέρω οι αλγόριθμοι εκπαίδευσης συζυγών κλίσεων και να αποδειχθεί η ολική τους σύγκλιση. Τέλος, παρουσιάζονται τα πειραματικά αποτελέσματα για διάφορα προβλήματα εκπαίδευσης.
Στο τελευταίο Κεφάλαιο της παρούσας διδακτορικής διατριβής, απομονώνεται και τροποποιείται ο κλιμακωτός αλγόριθμος του Perry, που παρουσιάστηκε στο προηγούμενο κεφάλαιο. Πιο συγκεκριμένα, ενώ διατηρούνται τα κύρια χαρακτηριστικά του αλγορίθμου εκπαίδευσης, εφαρμόζεται μια διαφορετική τεχνική γραμμικής αναζήτησης η οποία βασίζεται στις μη μονότονες συνθήκες του Wolfe. Επίσης προτείνεται ένας νέος αρχικός ρυθμός εκπαίδευσης για χρήση με τον κλιμακωτό αλγόριθμο εκπαίδευσης συζυγών κλίσεων, ο οποίος φαίνεται να είναι αποδοτικότερος από τον αρχικό ρυθμό εκπαίδευσης που προτάθηκε από τον Shanno όταν χρησιμοποιείται σε συνδυασμό με την μη μονότονη τεχνική γραμμικής αναζήτησης. Στη συνέχεια παρουσιάζονται τα πειραματικά αποτελέσματα για διάφορα προβλήματα εκπαίδευσης. Τέλος, ως εφαρμογή εκπαιδεύεται ένα πολυεπίπεδο εμπρόσθια τροφοδοτούμενο τεχνητό νευρωνικό δίκτυο με τον προτεινόμενο αλγόριθμο για το πρόβλημα της ταξινόμησης καρκινικών κυττάρων του εγκεφάλου και συγκρίνεται η απόδοση του με την απόδοση ενός πιθανοτικού τεχνητού νευρωνικού δικτύου.
Η διατριβή ολοκληρώνεται με το Παράρτημα Α’, όπου παρουσιάζονται τα προβλήματα εκπαίδευσης τεχνητών νευρωνικών δικτύων που χρησιμοποιήθηκαν για την αξιολόγηση των προτεινόμενων αλγορίθμων εκπαίδευσης. / In this dissertation, the problem of training feedforward artificial neural networks and its applications are considered. The presentation of the topics and results is organized as follows:
In the first chapter, artificial neural networks are introduced. Initially, the benefits of using artificial neural networks are presented, followed by their structure and functionality. More specifically, the derivation of artificial neurons from biological ones is presented, followed by the architecture of feedforward neural networks. Historical notes and the use of neural networks in real-world problems conclude the first chapter.
In Chapter 2, existing training algorithms for feedforward neural networks are considered. First, a summary of the training problem and its mathematical formulation, which corresponds to the unconstrained minimization of a cost function, is given. Next, training algorithms based on steepest descent, Newton, variable metric and conjugate gradient methods are presented. Furthermore, the weight space, the error surface and the techniques for initializing the weights are described, and their influence on the training procedure is discussed.
In Chapter 3, a new training algorithm for feedforward neural networks based on the backpropagation algorithm and an automatic two-point step size (learning rate) is presented. The algorithm uses the steepest descent search direction, while the learning rate parameter is calculated by minimizing the standard secant equation. Furthermore, a new learning rate parameter is derived by minimizing the modified secant equation introduced by Zhang, which uses both gradient and function value information. A switching mechanism is then incorporated into the algorithm so that the appropriate stepsize is chosen according to the status of the current iterative point. Finally, the global convergence of the proposed algorithm is studied and the results of some numerical experiments are presented.
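The two-point step size referred to above is the Barzilai-Borwein learning rate obtained from the standard secant equation. As a hedged illustration (the modified secant equation and the switching mechanism of the thesis are not reproduced), the sketch below runs steepest descent on a toy quadratic with the step size recomputed at each iteration from the two most recent points.

```python
import numpy as np

def bb_steepest_descent(grad, x0, alpha0=0.1, iters=50):
    """Steepest descent with the two-point Barzilai-Borwein step size
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x, g = x0.copy(), grad(x0)
    x_prev, g_prev = x.copy(), g.copy()
    x = x - alpha0 * g                                  # first step uses a fixed learning rate
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else alpha0
        x_prev, g_prev = x.copy(), g.copy()
        x = x - alpha * g
    return x

# Toy quadratic standing in for a network's error function; the minimizer is the origin.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
print(bb_steepest_descent(grad, np.array([1.0, 1.0, 1.0])))
```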
In Chapter 4, some efficient training algorithms based on conjugate gradient optimization methods are presented. In addition to the existing conjugate gradient training algorithms, Perry's conjugate gradient method is introduced as a training algorithm. Furthermore, a new class of conjugate gradient methods is proposed, called self-scaled conjugate gradient methods, which are derived from the principles of the Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribiere and Perry methods. This class is based on the spectral scaling parameter proposed by Barzilai and Borwein. An efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation is incorporated into the conjugate gradient training algorithms, and the initial learning rate parameter fed to the line search technique is automatically adapted at each iteration by a closed formula. Finally, an efficient restarting procedure is employed to further improve the effectiveness of the conjugate gradient training algorithms and to prove their global convergence. Experimental results show that, in general, the new class of methods performs better, with a much lower computational cost and better success performance.
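For reference, the classical conjugate gradient update directions from which these training algorithms are derived take the form below; Perry's parameter and the self-scaled variants developed in the thesis are not reproduced here.

```latex
% Classical conjugate-gradient directions (Hestenes-Stiefel, Fletcher-Reeves,
% Polak-Ribiere); g_k is the gradient of the error function at iteration k and
% y_k = g_{k+1} - g_k.
\[
  d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad
  \beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k}, \quad
  \beta_k^{\mathrm{FR}} = \frac{\lVert g_{k+1}\rVert^2}{\lVert g_k\rVert^2}, \quad
  \beta_k^{\mathrm{PR}} = \frac{g_{k+1}^{\top} y_k}{\lVert g_k\rVert^2}.
\]
```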
In the last chapter of this dissertation, Perry's self-scaled conjugate gradient training algorithm presented in the previous chapter is isolated and modified. More specifically, the main characteristics of the training algorithm are maintained, but a different line search strategy based on the nonmonotone Wolfe conditions is utilized. Furthermore, a new initial learning rate parameter is introduced for use in conjunction with the self-scaled conjugate gradient training algorithm, which appears to be more effective than the initial learning rate parameter proposed by Shanno when used with the nonmonotone line search technique. The experimental results for different training problems are then presented. Finally, a feedforward neural network is trained with the proposed algorithm on the problem of grading brain astrocytomas, and its performance is compared with that achieved by a probabilistic neural network.
The dissertation concludes with Appendix A', where the training problems used for the evaluation of the proposed training algorithms are presented.
|