201

The implementation of generalised models of magnetic materials using artificial neural networks

Saliah-Hassane, Hamadou 09 1900 (has links)
No description available.
202

Spiking Neural P Systems Simulation and Verification

Lefticaru, Raluca, Gheorghe, Marian, Konur, Savas, Niculescu, I.M., Adorna, H.N. 08 December 2021 (has links)
Spiking Neural (SN) P systems are a particular class of P systems that abstract and apply ideas from neurobiology. Various aspects, representations and features have been studied extensively, but tool support for modelling and analysing such systems is relatively limited. In this paper, we present a methodology that maps some classes of SN P systems to equivalent kernel P system representations, which allows SN P system dynamics to be analysed with the kPWORKBENCH tool. We illustrate the applicability of our approach in several case studies, including an example system from synthetic biology.
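
The paper's methodology maps SN P systems to kernel P systems for analysis in kPWORKBENCH; the sketch below is only a toy illustration of the kind of system being simulated. It assumes a heavily simplified firing condition (a spike-count threshold, where the general model uses regular expressions over spike multisets) and no firing delays; all names are illustrative.

```python
# Toy sketch of one synchronous evolution step of a simplified Spiking Neural
# P system: each neuron holds a number of spikes, and a firing rule consumes
# `consumed` spikes and emits one spike to every successor whenever the spike
# count meets the rule's threshold.

from dataclasses import dataclass, field

@dataclass
class Neuron:
    spikes: int       # current spike count
    threshold: int    # fire when spikes >= threshold (the general model uses
                      # a regular-expression condition; simplified here)
    consumed: int     # spikes consumed on firing
    successors: list = field(default_factory=list)  # indices of target neurons

def step(neurons):
    """Apply one synchronous evolution step; returns the new spike counts."""
    fired = [n.spikes >= n.threshold for n in neurons]
    incoming = [0] * len(neurons)
    for i, n in enumerate(neurons):
        if fired[i]:
            n.spikes -= n.consumed
            for j in n.successors:
                incoming[j] += 1
    for n, extra in zip(neurons, incoming):
        n.spikes += extra
    return [n.spikes for n in neurons]

# Example: a two-neuron loop that passes a single spike back and forth.
net = [Neuron(spikes=1, threshold=1, consumed=1, successors=[1]),
       Neuron(spikes=0, threshold=1, consumed=1, successors=[0])]
for _ in range(4):
    print(step(net))   # alternates [0, 1] and [1, 0]
```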
203

On the trainability, stability, representability, and realizability of artificial neural networks

Wang, Jun January 1991 (has links)
No description available.
204

Learning and recognizing patterns of visual motion, color and form

Cunningham, Robert Kevin January 1998 (has links)
Animal vision systems make use of information about an object's motion, color, and form to detect and identify predators, prey and mates. Neurobiological evidence from the macaque monkey indicates that visual processing is separated into two streams: the magnocellular primarily for motion, and the parvocellular primarily for color and form. Two computational systems are developed using key functional properties of the two postulated physiological streams. Each produces invariant representations that act as input to separate copies of a new learning and recognition architecture, Gaussian ARTMAP with covariance terms (GAC). Finally, perceptual experiments are conducted to explore the ability of the human form/color system to detect and recognize targets in photo-realistic imagery. GAC, the component common to both computational systems, retains the on-line learning capabilities of previous ARTMAP architectures, but uses categories that have a location and orientation in the dimensions of the feature space. This architecture is shown to have lower error rates than Fuzzy ARTMAP and Gaussian ARTMAP for all data sets examined, and is used to cluster motion and spectral parameters. For the motion system, local velocity measures of image features are obtained by the method of Convected Activation Profiles. This method is extended and shown to accurately estimate the velocity normal to rotating and translating lines, or of line ends, points, and curves. These local measures are grouped into neighborhoods, and the collection of motions within a neighborhood is described using orientation-invariant deformation parameters. Multiple parameters obtained by examining maneuvering objects are clustered, and motions that are characteristic of specific objects are identified. For the form and color system, multi-spectral measurements are made invariant to some fluctuations of local luminance and atmospheric transmissivity by within-band and across-band shunting networks. The resulting color-processed spectral patterns are clustered to enhance the performance of a machine target detection algorithm. Psychophysicists have examined human target detection capabilities primarily via scenes of polygonal targets and distractors on uniform backgrounds. Techniques are developed and experiments are performed to assess human performance of visual search for a complex object in a cluttered scene.
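
The luminance invariance mentioned above comes from the shunting networks. The thesis's exact variants are not reproduced here, but networks of this family are typically built on the standard Grossberg on-center off-surround shunting dynamics:

```latex
% Standard shunting on-center off-surround dynamics for an activity x_i
% driven by inputs I_k; A is passive decay, B and -C are the upper and
% lower saturation bounds:
\frac{dx_i}{dt} = -A x_i + (B - x_i)\, I_i - (x_i + C) \sum_{k \neq i} I_k
% Setting dx_i/dt = 0 gives the equilibrium activity:
x_i = \frac{B I_i - C \sum_{k \neq i} I_k}{A + \sum_{k} I_k}
```

Because the equilibrium depends on ratios of the inputs rather than their absolute values, a uniform scaling of all \(I_k\) (a global change in luminance or transmissivity) largely cancels, which is the invariance the abstract relies on.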
205

On functional dimension of univariate ReLU neural networks

Liang, Zhen January 2024 (has links)
Thesis advisor: Elisenda Grigsby / The space of parameter vectors for a feedforward ReLU neural network with any fixed architecture is a high-dimensional Euclidean space used to represent the associated class of functions. However, there exist well-known global symmetries, as well as poorly understood hidden symmetries, that do not change the function computed by the network. This makes the true dimension of the space of functions less than the number of parameters. In this thesis, we are interested in the structure of hidden symmetries for neural networks with various parameter settings, and in particular for neural networks with architecture \((1,n,1)\). For this class of architectures, we fully categorize the insufficiency of local functional dimension coming from activation patterns and give a complete list of combinatorial criteria guaranteeing that a parameter setting admits no hidden symmetries coming from slopes of the piecewise linear functions in the parameter space. Furthermore, we compute the probability that these hidden symmetries arise, which is rather small compared to the difference between functional dimension and the number of parameters. This suggests the existence of other hidden symmetries, and we investigate two mechanisms to better explain this phenomenon. Moreover, we motivate and define the notions of \(\varepsilon\)-effective activation regions and \(\varepsilon\)-effective functional dimension, and experimentally estimate the difference between \(\varepsilon\)-effective functional dimension and true functional dimension for various parameter settings and different values of \(\varepsilon\). / Thesis (PhD) — Boston College, 2024. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Mathematics.
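
The global symmetries mentioned above are concrete enough to check numerically. The sketch below (illustrative, not from the thesis) verifies the two standard ones for a \((1,n,1)\) network: permuting hidden units, and positively rescaling each unit's incoming weight and bias while inversely rescaling its outgoing weight; these are part of why the functional dimension falls below the raw parameter count of \(3n + 1\).

```python
# Numerical check of the well-known global symmetries of a (1, n, 1) ReLU
# network f(x) = sum_i v_i * relu(w_i * x + b_i) + c: permuting hidden units,
# and rescaling (w_i, b_i) -> (t_i * w_i, t_i * b_i), v_i -> v_i / t_i for
# t_i > 0, both leave f unchanged (relu is positively homogeneous).

import numpy as np

def f(x, w, b, v, c):
    return v @ np.maximum(w * x + b, 0.0) + c

rng = np.random.default_rng(0)
n = 5
w, b, v = rng.normal(size=(3, n))
c = rng.normal()
xs = rng.normal(size=20)

perm = rng.permutation(n)               # hidden-unit permutation
t = np.abs(rng.normal(size=n)) + 0.1    # positive per-unit scale factors

orig     = [f(x, w, b, v, c) for x in xs]
permuted = [f(x, w[perm], b[perm], v[perm], c) for x in xs]
rescaled = [f(x, t * w, t * b, v / t, c) for x in xs]

assert np.allclose(orig, permuted) and np.allclose(orig, rescaled)
print("permutation and positive-scaling symmetries verified")
```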
206

The role of interpretable neural architectures: from image classification to neural fields

Sambugaro, Zeno 07 1900 (has links)
Neural networks have demonstrated outstanding capabilities, surpassing human expertise across diverse tasks. Despite these advances, their widespread adoption is hindered by the complexity of interpreting their decision-making processes. This lack of transparency raises concerns in critical areas such as autonomous mobility, digital security, and healthcare. This thesis addresses the critical need for more interpretable and efficient neural-based technologies, aiming to enhance their transparency and lower their memory footprint. In the first part of this thesis, we introduce Agglomerator and Agglomerator++, two frameworks that embody the principles of hierarchical representation to improve the understanding and interpretability of neural networks. These models aim to bridge the cognitive gap between human visual perception and computational models, effectively enhancing the capability of neural networks to dynamically represent complex data. The second part of the manuscript addresses the lack of spatial coherence, and hence efficiency, of the latest fast-training neural field representations. To address this limitation we propose Lagrangian Hashing, a novel method that combines the efficiency of Eulerian grid-based representations with the spatial flexibility of Lagrangian point-based systems. The method extends the foundational work of hierarchical hashing, allowing for an adaptive allocation of the representation budget, and thereby preserves the coherence of the neural structure with respect to the reconstructed 3D space. Within the context of 3D reconstruction we also conduct a comparative evaluation of NeRF-based reconstruction methodologies against traditional photogrammetry to assess their usability in practical, real-world settings.
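
As background for what Lagrangian Hashing modifies, here is a toy sketch of the core operation of the Eulerian hash-grid baseline it builds on: a hashed grid-vertex feature lookup with bilinear interpolation (2D for brevity; the primes, table size, and names are illustrative assumptions, not the thesis implementation).

```python
# Toy hashed-grid feature lookup: snap each query point to its surrounding
# grid cell, hash the integer corner coordinates into a fixed-size feature
# table, and bilinearly interpolate the corner features.

import numpy as np

PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # spatial-hash primes

def grid_features(x, table, resolution):
    """Interpolated hashed features at 2D points x in [0, 1)^2."""
    T = table.shape[0]
    scaled = x * resolution
    base = np.floor(scaled).astype(np.uint64)   # lower-left cell corner
    frac = scaled - base                        # position within the cell
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            corner = base + np.array([dx, dy], dtype=np.uint64)
            h = (corner[:, 0] * PRIMES[0]) ^ (corner[:, 1] * PRIMES[1])
            idx = h % T
            w = ((frac[:, 0] if dx else 1 - frac[:, 0]) *
                 (frac[:, 1] if dy else 1 - frac[:, 1]))
            out = out + w[:, None] * table[idx]
    return out

rng = np.random.default_rng(0)
table = rng.normal(size=(2**14, 2)).astype(np.float32)   # hashed feature table
pts = rng.random((5, 2))
print(grid_features(pts, table, resolution=64).shape)    # (5, 2)
```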
207

A neural model of head direction calibration during spatial navigation: learned integration of visual, vestibular, and motor cues

Fortenberry, Bret January 2012 (has links)
Thesis (Ph.D.)--Boston University, 2012 / PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / Effective navigation depends upon reliable estimates of head direction (HD). Visual, vestibular, and outflow motor signals combine for this purpose in a brain system that includes the dorsal tegmental nucleus, lateral mammillary nuclei (LMN), anterior dorsal thalamic nucleus (ADN), and postsubiculum (PoS). Learning is needed to combine such different cues and to provide reliable estimates of HD. A neural model is developed to explain how these three types of signals combine adaptively within the above brain regions to generate a consistent and reliable HD estimate, in both light and darkness. The model starts by establishing HD cells such that each cell is tuned to a preferred head direction: the firing rate is maximal at the preferred direction and decreases as the head turns away from it. In the brain, HD cells fire in anticipation of a head rotation. This anticipation is measured by the anticipatory time interval (ATI), which is greater in early processing stages of the HD system than at later stages. The ATI is greatest in the LMN at about 70 ms, is reduced in the ADN to about 25 ms, and is absent in the last HD stage, the PoS. In the model, these HD estimates are controlled at the corresponding processing stages by combinations of vestibular and motor signals as they become adaptively calibrated to produce a correct HD estimate. The model also simulates how visual cues anchor HD estimates through adaptive learning when the cue is in the animal's field of view. Such learning gains control over cell firing within minutes. As in the data, distal visual cues are more effective than proximal cues for anchoring the preferred direction. A novel cue introduced in either a novel or a familiar environment gains control over a cell's preferred direction within minutes. Turning out the lights or removing all familiar cues does not change the cells' firing activity, but drift in a cell's preferred direction may accumulate.
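
For concreteness, the tuning property described above (firing maximal at the preferred direction, falling off as the head turns away) can be sketched as a circular tuning curve. The Gaussian shape and parameter values below are illustrative assumptions, not the model's actual equations.

```python
# Illustrative HD-cell tuning curve: peak firing at the preferred head
# direction, decaying with circular angular distance from it.

import numpy as np

def hd_rate(theta, preferred, peak_rate=40.0, width=0.5):
    """Firing rate (Hz) of an HD cell as a function of head direction (radians)."""
    # circular distance between current and preferred head direction
    d = np.angle(np.exp(1j * (theta - preferred)))
    return peak_rate * np.exp(-(d ** 2) / (2 * width ** 2))

pref = np.pi / 4
for deg in (0, 45, 90, 180):
    theta = np.radians(deg)
    print(f"{deg:3d} deg -> {hd_rate(theta, pref):5.1f} Hz")
```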
208

Face Recognition with Preprocessing and Neural Networks

Habrman, David January 2016 (has links)
Face recognition is the problem of identifying individuals in images. This thesis evaluates two methods used to determine whether pairs of face images belong to the same individual or not. The first method is a combination of principal component analysis and a neural network, and the second method is based on state-of-the-art convolutional neural networks. They are trained and evaluated using two different data sets. The first set contains many images with large variations in, for example, illumination and facial expression. The second consists of fewer images with small variations. Principal component analysis allowed the use of smaller networks: the largest network has 1.7 million parameters, compared to the 7 million used in the convolutional network. The use of smaller networks lowered the training and evaluation times significantly. Principal component analysis proved to be well suited for the data set with small variations, outperforming the convolutional network, which needs larger data sets to avoid overfitting. The reduction in data dimensionality, however, led to difficulties classifying the data set with large variations. The generous amount of images in this set allowed the convolutional method to reach higher accuracies than the principal component method.
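
A minimal sketch of the first method's pipeline, assuming scikit-learn and a stand-in data set (the thesis evaluates pair verification on its own data sets; plain identity classification is used here as a simpler proxy):

```python
# Eigenfaces-style pipeline: project face images onto principal components,
# then feed the low-dimensional codes to a small neural network classifier.

from sklearn.datasets import fetch_olivetti_faces   # stand-in data set
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

faces = fetch_olivetti_faces()                      # 400 images, 40 people
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

pca = PCA(n_components=100, whiten=True).fit(X_tr)  # 4096 -> 100 dimensions
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)

print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```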
209

Computational Complexity of Hopfield Networks

Tseng, Hung-Li 08 1900 (has links)
This dissertation contains three main results: the PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), the exponential convergence behavior of discrete Hopfield networks, and the simulation of Turing machines by discrete Hopfield networks.
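
For reference, a small sketch (not from the dissertation) of the object under study: a discrete Hopfield network with symmetric weights and asynchronous updates, under which the energy \(E(s) = -\tfrac{1}{2} s^{T} W s\) never increases, so every trajectory reaches a stable state. The dissertation's exponential-convergence result shows that reaching one can nonetheless take exponentially many steps.

```python
# Discrete Hopfield network with symmetric, zero-diagonal weights.
# Asynchronous updates monotonically lower the energy, so the loop below
# terminates at a stable state (a local energy minimum).

import numpy as np

def energy(W, s):
    return -0.5 * s @ W @ s

def converge(W, s):
    """Asynchronously update until a stable state is reached."""
    s = s.copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
    return s

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                 # symmetric weights
np.fill_diagonal(W, 0)            # no self-connections
s0 = rng.choice([-1, 1], size=n)
s = converge(W, s0)
print("stable state:", s, " energy:", energy(W, s))
```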
210

TWIST1: a subtle modulator of neural differentiation and neural tube formation

Nistor, Paul Andrei January 2013 (has links)
The central nervous system is formed from epiblast precursor cells through neurulation. Neural induction can be studied in its main aspects in vitro; however, the process is poorly understood, especially with regard to when and how a cell becomes specified, and then committed, to a neural fate. It is, on the other hand, well established that neural formation requires the absence, or inhibition, of BMP signalling both in vivo and in vitro. ID1 is a direct target of BMP signalling with a major influence on in vitro neural differentiation. A cDNA library screen looking for transcription factors negatively regulated by ID1 reported TWIST1, along with only two other proteins. Twist1 expression is upregulated during in vitro neural differentiation. Furthermore, targeted deletion of Twist1 has dramatic consequences for anterior neural development: Twist1 knock-out mice fail to close the neural tube in the prospective brain, leading to exencephaly and early embryonic death. In this thesis I investigate the influence on in vitro neural differentiation of a constitutively active form of TWIST1 that is insensitive to ID1 inhibition. I report that this transcriptionally active TWIST1 accelerates neural differentiation in vitro and biases it towards dorsal phenotypes. I provide the first evidence of Twist1 expression in neural tissue, observed weakly in a temporally and spatially restricted domain in the dorsal part of the neural tube, and I propose a new model for TWIST1's influence at this level. I also investigate how the actions of TWIST1 depend on expression level and dimer choice, finding that TWIST1 can exert its neural-modulating actions only at low levels, as high levels divert cell fate towards non-neural lineages.
