  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Recurrent neural networks in the chemical process industries

Lourens, Cecil Albert 04 September 2012 (has links)
M.Ing. / This dissertation discusses the results of a literature survey into the theoretical aspects and development of recurrent neural networks. In particular, the various architectures and training algorithms developed for recurrent networks are discussed. The characteristics important for the efficient implementation of recurrent neural networks to model nonlinear dynamical processes have also been investigated and are discussed. Process control has been identified as a field of application where recurrent networks may play an important role. The model-based adaptive control strategy is briefly introduced, and the application of recurrent networks to both the direct and the indirect adaptive control strategy is highlighted. In conclusion, the important areas of future research for the successful implementation of recurrent networks in adaptive nonlinear control are identified.
92

Formation of the complex neural networks under multiple constraints

Chen, Yuhan 01 January 2013 (has links)
No description available.
93

A model of adaptive invariance

Wood, Jeffrey James January 1995 (has links)
This thesis is about adaptive invariance, and a new model of it: the Group Representation Network. We begin by discussing the concept of adaptive invariance. We then present standard background material, mostly from the fields of group theory and neural networks. Following this we introduce the problem of invariant pattern recognition and describe a number of methods for solving various instances of it. Next, we define the Symmetry Network, a connectionist model of permutation invariance, and we develop some new theory of this model. We also extend the applicability of the Symmetry Network to arbitrary finite group actions. We then introduce the Group Representation Network (GRN) as an abstract model, with which in principle we can construct concomitants between arbitrary group representations. We show that the GRN can be regarded as a neural network model, and that it includes the Symmetry Network as a submodel. We apply group representation theory to the analysis of GRNs. This yields general characterizations of the allowable activation functions in a GRN and of their weight matrix structure. We examine various generalizations and restricted cases of the GRN model, and in particular look at the construction of GRNs over infinite groups. We then consider the issue of a GRN's discriminability, which relates to the problem of graph isomorphism. We look next at the computational abilities of the GRN, and postulate that it is capable of approximately computing any group concomitant. We show constructively that any polynomial concomitant can be computed by a GRN. We also prove that a variety of standard models for invariant pattern recognition can be viewed as special instances of the GRN model. Finally, we propose that the GRN model may be biologically plausible and give suggestions for further research.
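The permutation-invariance idea behind the Symmetry Network can be illustrated with a minimal sketch (not taken from the thesis; the specific layer form `a*I + b*ones` and all names here are illustrative assumptions). A layer whose weight matrix is constrained to this form commutes with every permutation matrix, so it is a concomitant of the permutation action in the thesis's terminology:

```python
import numpy as np

def equivariant_layer(x, a, b):
    """A layer with weight matrix W = a*I + b*(ones outer ones).
    Any such W satisfies W @ P == P @ W for every permutation matrix P,
    so permuting the input permutes the output the same way."""
    return a * x + b * x.sum() * np.ones_like(x)

x = np.array([1.0, 2.0, 3.0])
perm = [2, 0, 1]

lhs = equivariant_layer(x, 0.5, 0.1)[perm]   # apply layer, then permute
rhs = equivariant_layer(x[perm], 0.5, 0.1)   # permute, then apply layer
```

Here `lhs` and `rhs` agree for any choice of `a`, `b`, and `perm`, which is the equivariance property the weight-sharing constraint buys.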
94

The combination of AI modelling techniques for the simulation of manufacturing processes

Korn, Stefan January 1998 (has links)
No description available.
95

COLANDER: Convolving Layer Network Derivation for E-recommendations

Timokhin, Dmitriy 01 June 2021 (has links) (PDF)
Many consumer facing companies have large scale data sets that they use to create recommendations for their users. These recommendations are usually based off information the company has on the user and on the item in question. Based on these two sets of features, models are created and tuned to produce the best possible recommendations. A third set of data that exists in most cases is the presence of past interactions a user may have had with other items. The relationships that a model can identify between this information and the other two types of data, we believe, can improve the prediction of how a user may interact with the given item. We propose a method that can inform the model of these relationships during the training phase while only relying on the user and item data during the prediction phase. Using ideas from convolutional neural networks (CNN) and collaborative filtering approaches, our method manipulated the weights in the first layer of our network design in a way that achieves this goal.
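The abstract's central idea, letting past interactions shape the first layer's weights during training while prediction consumes only user and item features, might be sketched roughly as follows. This is a guess at the flavor of the approach, not the paper's method: the matrix shapes, the SVD-based seeding, and every name here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, hidden = 6, 8, 3

# Past user-item interactions (available only at training time).
R = (rng.random((n_users, n_items)) > 0.6).astype(float)

# Collaborative-filtering step: low-rank factors of the interaction matrix.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
item_factors = Vt[:hidden].T              # (n_items, hidden)

# Hypothetical first layer: item inputs are mapped through weights seeded
# from the interaction factors, so relationships learned from interaction
# history are baked into the layer. Prediction then needs only the user
# and item feature vectors, not the interaction history itself.
W_item = item_factors.T                   # (hidden, n_items)

def score(user_vec, item_onehot):
    """Toy scoring function: user embedding dotted with the item's
    collaboratively-seeded first-layer representation."""
    return float(user_vec @ (W_item @ item_onehot))

user_vec = rng.random(hidden)
item_onehot = np.eye(n_items)[2]
s_val = score(user_vec, item_onehot)
```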
96

A neural network approach to burst detection

Mounce, Steve R., Day, Andrew J., Wood, Alastair S., Khan, Asar, Widdop, Peter D., Machell, James January 2002 (has links)
No description available.
97

The implementation of generalised models of magnetic materials using artificial neural networks

Saliah-Hassane, Hamadou 09 1900 (has links)
No description available.
98

On the trainability, stability, representability, and realizability of artificial neural networks

Wang, Jun January 1991 (has links)
No description available.
99

Computational Complexity of Hopfield Networks

Tseng, Hung-Li 08 1900 (has links)
There are three main results in this dissertation. They are PLS-completeness of discrete Hopfield network convergence with eight different restrictions, (degree 3, bipartite and degree 3, 8-neighbor mesh, dual of the knight's graph, hypercube, butterfly, cube-connected cycles and shuffle-exchange), exponential convergence behavior of discrete Hopfield network, and simulation of Turing machines by discrete Hopfield Network.
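The convergence behaviour studied in the dissertation can be illustrated with a toy discrete Hopfield network under sequential (asynchronous) updates; this is the generic textbook construction, not code from the dissertation. Sequential updates never increase the standard energy function, so the dynamics settle into a fixed point:

```python
import numpy as np

def energy(W, s):
    # Standard Hopfield energy: E = -1/2 * s^T W s
    return -0.5 * s @ W @ s

def hopfield_converge(W, s, max_sweeps=100):
    """Sequential updates until a fixed point (no neuron changes)."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Store one pattern via the Hebbian outer-product rule (zero diagonal).
p = np.array([1, -1, 1, -1, 1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)

noisy = p.copy()
noisy[0] = -noisy[0]          # corrupt one bit
recovered = hopfield_converge(W, noisy)
```

With one stored pattern and a single flipped bit, the network recovers `p`, and the energy of the converged state is no greater than that of the corrupted input. The dissertation's PLS-completeness results concern how long such convergence can take in the worst case on restricted graph topologies.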
100

Learning in large-scale spiking neural networks

Bekolay, Trevor January 2011 (has links)
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
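The rate-based BCM rule mentioned in the abstract has a standard textbook form that can be sketched as below (a generic formulation with illustrative learning-rate and averaging parameters, not the thesis's spiking adaptation). The key property the abstract relies on is visible here: the rule needs only a running average of postsynaptic activity, not the times of previous spikes.

```python
import numpy as np

def bcm_update(w, x, theta, lr=0.01, tau=0.9):
    """One step of the BCM rule for a rate-based neuron.
    dw = lr * y * (y - theta) * x, where the sliding threshold theta
    tracks a running average of y**2 (no spike-time memory needed)."""
    y = float(w @ x)
    w = w + lr * y * (y - theta) * x
    theta = tau * theta + (1 - tau) * y ** 2
    return w, theta

rng = np.random.default_rng(1)
w = rng.random(4) * 0.1
theta = 0.0
for _ in range(200):
    x = rng.random(4)
    w, theta = bcm_update(w, x, theta)
```

The sliding threshold makes the rule self-stabilizing: when activity runs high, `theta` rises above `y` and the same inputs start to depress rather than potentiate.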
