About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Computational Complexity of Hopfield Networks

Tseng, Hung-Li 08 1900 (has links)
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), exponential convergence behavior of discrete Hopfield networks, and simulation of Turing machines by discrete Hopfield networks.
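As background for readers unfamiliar with the model discussed above, the following is a minimal sketch of a discrete Hopfield network with asynchronous updates and its energy function; the Hebbian weight construction, zero thresholds, and random update order are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np

def hopfield_converge(W, s, max_sweeps=100, rng=None):
    """Run asynchronous updates of a discrete Hopfield network until no
    unit changes state (a local minimum of the energy) or the sweep limit
    is reached. W is assumed symmetric with zero diagonal."""
    rng = np.random.default_rng(rng)
    s = s.copy()
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):          # asynchronous, random order
            new_state = 1 if W[i] @ s >= 0 else -1
            if new_state != s[i]:
                s[i] = new_state
                changed = True
        if not changed:                       # stable state: converged
            return s
    return s

def energy(W, s):
    """Energy function that asynchronous updates never increase."""
    return -0.5 * s @ W @ s

# Toy example: store one pattern with a Hebbian outer product,
# then recover it from a corrupted starting state.
p = np.array([1, -1, 1, -1, 1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)
noisy = p.copy()
noisy[0] *= -1                                # flip one bit
recalled = hopfield_converge(W, noisy)
print(recalled, energy(W, recalled))          # recovers p at minimum energy
```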
72

Learning in large-scale spiking neural networks

Bekolay, Trevor January 2011 (has links)
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback.

In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks.

In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task.

In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP.

Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
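As a rough illustration of the rate-based BCM rule mentioned in this abstract, here is a minimal sketch for a single unit; the learning rate, the running-average sliding threshold, and the toy input patterns are assumptions for illustration and are not taken from the thesis.

```python
import numpy as np

def bcm_update(w, x, theta, lr=0.01, tau=0.9):
    """One BCM step for a single rate-based unit.
    w: weight vector, x: presynaptic rates, theta: sliding threshold
    tracking a running average of the squared postsynaptic rate."""
    y = float(w @ x)                      # postsynaptic rate
    w = w + lr * x * y * (y - theta)      # potentiation above theta, depression below
    theta = tau * theta + (1 - tau) * y ** 2
    return w, theta

# Toy usage: drive the unit with two alternating input patterns.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=4)
theta = 0.0
patterns = [np.array([1.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 1.0])]
for step in range(1000):
    w, theta = bcm_update(w, patterns[step % 2], theta)
print(w)   # the unit typically becomes selective for one of the patterns
```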
74

Analysis of electrocardiograms using artificial neural networks

Hedén, Bo. January 1997 (has links)
Thesis (doctoral)--Lund University, 1997. / Added t.p. with thesis statement inserted.
76

COMPARISON OF PRE-TRAINED CONVOLUTIONAL NEURAL NETWORK PERFORMANCE ON GLIOMA CLASSIFICATION

Unknown Date (has links)
Gliomas are an aggressive class of brain tumors that are associated with a better prognosis when detected at a lower grade. Effective differentiation and classification are imperative for early treatment. MRI is a popular medical imaging modality for detecting and diagnosing brain tumors because of its capability to non-invasively highlight the tumor region. With the rise of deep learning, researchers have used convolutional neural networks for classification in this domain, particularly pre-trained networks to reduce computational costs. However, varying MRI modalities, differing MRI machines, and poor image scan quality cause different network structures to yield different performance metrics. Each pre-trained network is designed with a different structure that allows robust results under specific problem conditions. This thesis aims to fill a gap in the literature by comparing the performance of popular pre-trained networks on a controlled dataset that differs from the domain on which the networks were trained. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
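The following is a hedged sketch of the kind of transfer-learning setup the abstract describes, using a torchvision ResNet pre-trained on ImageNet with its classifier head replaced; the dataset path, number of classes, and training hyperparameters are placeholders, not details from the thesis.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Placeholder assumptions: 3 tumor classes and an ImageFolder dataset
# at ./mri_dataset/train; neither comes from the thesis.
NUM_CLASSES = 3
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("./mri_dataset/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Load an ImageNet pre-trained backbone and swap its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                    # short fine-tuning run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping `resnet18` for another pre-trained backbone (e.g. a VGG or DenseNet from torchvision) while keeping the rest of the pipeline fixed is one straightforward way to run the kind of controlled comparison the abstract refers to.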
77

Symbol Grounding Using Neural Networks

Horvitz, Richard P. 05 October 2012 (has links)
No description available.
78

The evolutionary consequences of redundancy in natural and artificial genetic codes

Barreau, Guillaume January 1998 (has links)
No description available.
79

A connectionist approach in music perception

Carpinteiro, Otavio Augusto Salgado January 1996 (has links)
Little research has been carried out to understand the mechanisms underlying the perception of polyphonic music. Perception of polyphonic music involves thematic recognition, that is, recognition of instances of a theme across polyphonic voices, whether or not they appear unaccompanied, transposed, or altered. Many questions concerning thematic recognition in the polyphonic domain remain open to debate. One of them, in particular, is whether cognitive mechanisms of segmentation and thematic reinforcement facilitate thematic recognition in polyphonic music.

This dissertation proposes a connectionist model to investigate the role of segmentation and thematic reinforcement in thematic recognition in polyphonic music. The model comprises two stages. The first stage consists of a supervised artificial neural model that segments musical pieces in accordance with three cases of rhythmic segmentation. The supervised model is trained and tested on sets of contrived patterns, and successfully applied to six musical pieces by J. S. Bach. The second stage consists of an original unsupervised artificial neural model that performs thematic recognition. The unsupervised model is trained and assessed on a four-part fugue by J. S. Bach.

The research carried out in this dissertation contributes to two distinct fields. First, it contributes to the field of artificial neural networks. The original unsupervised model encodes and manipulates context information effectively, which enables it to perform sequence classification and discrimination efficiently. It has applications in cognitive domains that demand classifying either a set of sequences of vectors in time or sub-sequences within a single large sequence of vectors in time. Second, the research contributes to the field of music perception. The results obtained with the connectionist model suggest, along with other important conclusions, that thematic recognition in polyphony is not facilitated by segmentation but is facilitated by thematic reinforcement.
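As a loose illustration of how a connectionist model can encode temporal context for sequence classification, here is a minimal Elman-style recurrent layer in NumPy; it is a generic sketch, not the unsupervised model developed in the dissertation, and the toy pitch-class input representation is an assumption.

```python
import numpy as np

class ElmanContext:
    """Generic Elman-style recurrent layer: a rough illustration of how a
    connectionist model can carry context across a sequence of input
    vectors. This is NOT the dissertation's unsupervised model, just a
    common way of encoding temporal context."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.h = np.zeros(n_hidden)          # context units

    def step(self, x):
        # The new hidden state depends on the current input and the previous
        # context, so each event's representation reflects its history.
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.h)
        return self.h

    def encode(self, sequence):
        """Return the final context vector for a whole sequence; the codes
        of two sequences can then be compared or clustered."""
        self.h = np.zeros_like(self.h)
        for x in sequence:
            self.step(x)
        return self.h

# Toy usage: encode two short "melodies" given as one-hot pitch-class
# vectors (an assumed toy representation) and compare their context codes.
enc = ElmanContext(n_in=12, n_hidden=8)
melody_a = [np.eye(12)[p] for p in (0, 4, 7, 4)]
melody_b = [np.eye(12)[p] for p in (2, 5, 9, 5)]
code_a, code_b = enc.encode(melody_a), enc.encode(melody_b)
print(np.dot(code_a, code_b))                # similarity of the two codes
```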
80

Transformation-invariant topology preserving maps

McGlinchey, Stephen John January 2000 (has links)
No description available.
