1

Computational Complexity of Hopfield Networks

Tseng, Hung-Li 08 1900 (has links)
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), exponential convergence behavior of discrete Hopfield networks, and simulation of Turing machines by discrete Hopfield networks.
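
As background for the convergence results mentioned above, here is a minimal sketch (not taken from the dissertation) of a discrete Hopfield network with asynchronous updates and the standard energy function; the random symmetric weights, zero thresholds, and network size are illustrative assumptions.

```python
import numpy as np

def energy(W, s, theta):
    # Standard discrete Hopfield energy: E(s) = -1/2 * s^T W s + theta . s
    return -0.5 * s @ W @ s + theta @ s

def converge(W, theta, s, max_sweeps=1000):
    """Asynchronous updates until no unit flips, i.e. a local minimum of E."""
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            new_si = 1.0 if W[i] @ s >= theta[i] else -1.0
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:
            break
    return s

# Illustrative run on a random symmetric network with zero diagonal.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
theta = np.zeros(n)
s = rng.choice([-1.0, 1.0], size=n)
print("energy before:", energy(W, s, theta))
s = converge(W, theta, s)
print("energy after: ", energy(W, s, theta))
```

With symmetric weights and a zero diagonal, each asynchronous flip cannot increase the energy, so the loop terminates at a local minimum; the PLS-completeness and exponential-convergence results above concern how costly reaching such a minimum can be under the listed graph restrictions.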
2

A Neural Method of Computing Optical Flow Based on Geometric Constraints

Kaimal, Vinod Gopalkrishna January 2002 (has links)
No description available.
3

An Enhanced Learning for Restricted Hopfield Networks

Halabian, Faezeh 10 June 2021 (has links)
This research investigates a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield Networks. Hopfield Networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for the RHN which not only improves the convergence of the training sub-routine, but is also shown to enhance the learning capability of the network. In particular, after describing the architecture/components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation through time, results in a training algorithm with enhanced convergence for the RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of the different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns against a conventional RBM, a classical Hopfield network, and other training methods.
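
The abstract mentions a modified variant of SPSA used together with back-propagation through time; the modification itself is not described here, so below is a minimal sketch of plain SPSA (simultaneous perturbation stochastic approximation) on a generic loss, purely as an illustration of the underlying technique. The quadratic loss, gain schedules, and parameter shapes are assumptions for this example, not the thesis's actual training setup.

```python
import numpy as np

def spsa_step(loss, theta, a, c, rng):
    """One SPSA update: estimate the gradient from two loss evaluations
    along a random +/-1 perturbation, then take a descent step."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c * delta)
    return theta - a * g_hat

# Illustrative use: minimize a simple quadratic toward a known target.
target = np.array([1.0, -2.0, 0.5])
loss = lambda th: float(np.sum((th - target) ** 2))

rng = np.random.default_rng(0)
theta = np.zeros(3)
for k in range(500):
    # Decaying gain sequences; the constants are arbitrary for this sketch.
    a_k = 0.5 / (k + 10)
    c_k = 0.1 / (k + 1) ** 0.101
    theta = spsa_step(loss, theta, a_k, c_k, rng)

print(theta)  # should end up close to `target`
```

The appeal of SPSA is that it needs only two loss evaluations per step regardless of the number of parameters, which is what makes it attractive to combine with gradient-based components such as back-propagation.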
4

Advances in electrical capacitance tomography

Marashdeh, Qussai Mohammad 07 August 2006 (has links)
No description available.
5

Renormalization group theory, scaling laws and deep learning

Haggi Mani, Parviz 08 1900 (has links)
The question of the possibility of intelligent machines is fundamentally intertwined with the machines' ability to reason. Or not. The developments of recent years point in a completely different direction: what we need are simple, generic, but scalable algorithms that can keep learning on their own. This thesis is an attempt to find theoretical explanations for the findings of recent years, in which empirical evidence has been presented in support of phase transitions in neural networks, power-law behavior of various entities, and even evidence of algorithmic universality, all of which are beautifully explained in the context of statistical physics, quantum field theory, and statistical field theory, but not necessarily in the context of deep learning, where no complete theoretical framework is available. Inspired by these developments, and, as it turns out, with the overly ambitious goal of providing a solid theoretical explanation of the empirically observed power laws in neural networks, we set out to substantiate the claims that renormalization group theory may be the sought-after theory of deep learning which may explain the above, as well as what we call algorithmic universality.
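
For context on the empirically observed power laws referenced above, a commonly reported form of neural scaling laws (illustrative background, not a result of this thesis) expresses test loss as a power law in model size, dataset size, or compute:

```latex
% Illustrative form of empirical neural scaling laws; the constants
% N_c, D_c, C_c and exponents \alpha_N, \alpha_D, \alpha_C are fitted
% empirically and are not results of this thesis.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
```

Power-law exponents and critical behavior of exactly this kind are what renormalization group theory explains in statistical physics, which is the analogy the thesis pursues for deep learning.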
