  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
641

Implementation limits for artificial neural networks

Baker, Thomas Edward 02 1900
M.S. / Computer Science and Engineering / Before artificial neural network applications become common, there must be inexpensive hardware that allows large networks to run in real time. It is uncertain how well large networks will perform when constrained to implementations on current hardware architectures. Some tradeoffs must be made when the network models are implemented efficiently. Three popular artificial neural network models are analyzed. This paper discusses the effects on performance when the models are modified for efficient hardware implementation.
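One tradeoff of the kind the abstract alludes to is reduced arithmetic precision. As a hedged illustration (not drawn from the thesis itself; all names and parameter values are invented for the example), the sketch below quantizes a small layer's weights to signed 8-bit fixed point and measures the effect on its output:

```python
import numpy as np

def quantize(w, bits=8, w_max=1.0):
    """Round weights to a signed fixed-point grid of the given bit width."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / w_max * levels), -levels, levels) * w_max / levels

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(4, 3))       # full-precision weights
x = rng.uniform(-1, 1, size=3)            # one input vector

full = np.tanh(w @ x)                     # full-precision activations
low  = np.tanh(quantize(w, bits=8) @ x)   # same layer with 8-bit weights

print(np.max(np.abs(full - low)))         # worst-case activation error
```

With 8 bits, each weight moves by at most 1/127, so the activation error here stays small; narrower widths trade accuracy for cheaper hardware.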
642

Zeitreihenanalyse natuerlicher Systeme mit neuronalen Netzen und (Time series analysis of natural systems with neural networks and)

Weichert, Andreas 27 February 1998
No description available.
643

Secret sharing using artificial neural network

Alkharobi, Talal M. 15 November 2004
Secret sharing is a fundamental notion in secure cryptographic design. In a secret sharing scheme, a set of participants shares a secret among them such that only pre-specified subsets of the shares can be combined to recover the secret. This dissertation introduces a neural network approach to the problem of secret sharing for any given access structure. Other approaches have been used to solve this problem; however, the known approaches result in an exponential increase in the amount of data that every participant needs to keep. This amount is measured by the scheme's information rate. This work aims to solve the problem with a better information rate.
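For context, the classical baseline the neural approach competes with is Shamir's (t, n) threshold scheme, where a degree t-1 polynomial over a prime field hides the secret in its constant term. The sketch below is standard Shamir sharing, not the dissertation's neural scheme; the prime and parameters are arbitrary:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, n, t):
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(secret=123456789, n=5, t=3)
print(recover(shares[:3]))  # any 3 shares suffice -> 123456789
```

Shamir is ideal (information rate 1) but only handles threshold access structures; it is general access structures where the known schemes pay the exponential share-size cost the abstract mentions.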
644

Hierarchical modeling of multi-scale dynamical systems using adaptive radial basis function neural networks: application to synthetic jet actuator wing

Lee, Hee Eun 30 September 2004
To obtain a suitable mathematical model of the input-output behavior of highly nonlinear, multi-scale, nonparametric phenomena, we introduce an adaptive radial basis function approximation approach. We use this approach to estimate the discrepancy between traditional models and the multi-scale physics of systems involving distributed sensing and technology. Radial basis function networks offer a suitable approach to nonparametric multi-scale modeling of dynamical systems such as an adaptive wing with a Synthetic Jet Actuator (SJA). We use the Regularized Orthogonal Least Squares method (Mark, 1996) and the RAN-EKF (Resource Allocating Network-Extended Kalman Filter) as reference approaches. The first part of the algorithm determines the location of centers one by one until the error goal is met and regularization is achieved. The second part adapts all the parameters of the radial basis function network: centers, variances (shapes), and weights. To demonstrate the effectiveness of these algorithms, SJA wind tunnel data are modeled using this approach. Good performance is obtained compared with conventional approaches such as the multilayer neural network and the least-squares algorithm. Following this work, we establish Model Reference Adaptive Control (MRAC) formulations using an off-line radial basis function network (RBFN), and introduce an adaptive control law using the RBFN. A theory that combines the RBFN and adaptive control is demonstrated through a simple numerical simulation of the SJA wing. It is expected that these studies will provide a basis for achieving an intelligent control structure for future active-wing aircraft.
645

On Data Mining and Classification Using a Bayesian Confidence Propagation Neural Network

Orre, Roland January 2003
The aim of this thesis is to describe how a statistically based neural network technology, here named BCPNN (Bayesian Confidence Propagation Neural Network), which may be derived by rewriting Bayes' rule, can be used within a few applications: data mining and classification with credibility intervals, as well as unsupervised pattern recognition. BCPNN is a neural network model somewhat reminiscent of the Bayesian decision trees often used within artificial intelligence systems. It has previously been successfully applied to classification tasks such as fault diagnosis, supervised pattern recognition, and hierarchical clustering, and has also been used as a model for cortical memory. The learning paradigm used in BCPNN is rather different from that of many other neural network architectures. Learning in, e.g., the popular backpropagation (BP) network is a gradient method on an error surface, whereas learning in BCPNN is based on calculations of marginal and joint probabilities between attributes. This is a quite time-efficient process compared to, for instance, gradient learning. The interpretation of the weight values in BCPNN is also easy compared to many other network architectures. The values of these weights and their uncertainty are also what we focus on in our data mining application. The most important results and findings in this thesis can be summarised in the following points:
- We demonstrate how BCPNN can be extended to model the uncertainties in collected statistics to produce outcomes as distributions, from two different aspects: uncertainties induced by sparse sampling, which is useful for data mining, and uncertainties due to input data distributions, which is useful for process modelling.
- We indicate how classification with BCPNN gives higher certainty than an optimal Bayes classifier and better precision than a naïve Bayes classifier for limited data sets.
- We show how these techniques have been turned into a useful tool for real-world applications, within the drug safety area in particular.
- We present a simple but working method for automatic temporal segmentation of data sequences, and indicate some aspects of temporal tasks for which a Bayesian neural network may be useful.
- We present a method, based on recurrent BCPNN, which performs a task similar to an unsupervised clustering method on a large database with noisy, incomplete data, but much more quickly, with an efficiency in finding patterns comparable to a well-known Bayesian clustering method (Autoclass) when compared on artificial data sets. Apart from BCPNN being able to deal with really large data sets, because it is a global method working on collective statistics, we also get good indications that the outcome from BCPNN has higher clinical relevance than Autoclass in our application on the WHO database of adverse drug reactions, and is therefore a relevant data mining tool to use on that database.
Keywords: artificial neural network, Bayesian neural network, data mining, adverse drug reaction signalling, classification, learning.
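The count-based weight rule the thesis contrasts with gradient learning can be sketched directly from co-occurrence statistics. Assuming the common BCPNN form w_ij = log P(x_i, y_j) - log P(x_i) - log P(y_j) with Laplace smoothing (the exact estimator in the thesis may differ), a minimal version is:

```python
import numpy as np

def bcpnn_weights(X, Y, alpha=1.0):
    """Log-odds weights from co-occurrence counts (Laplace-smoothed;
    the smoothing constants here are illustrative).

    X, Y: binary matrices, one row per observation.
    w[i, j] = log P(x_i, y_j) - log P(x_i) - log P(y_j)
    """
    n = X.shape[0]
    p_x = (X.sum(0) + alpha) / (n + 2 * alpha)
    p_y = (Y.sum(0) + alpha) / (n + 2 * alpha)
    p_xy = (X.T @ Y + alpha) / (n + 4 * alpha)
    return np.log(p_xy) - np.log(p_x)[:, None] - np.log(p_y)[None, :]

# x0 and y0 always co-occur, so their weight is positive;
# x0 and y1 never do, so that weight is negative.
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
W = bcpnn_weights(X, Y)
print(W[0, 0], W[0, 1])
```

One pass over the data to accumulate counts yields all weights, which is why this learning is so much cheaper than gradient descent, and the weights read off directly as evidence for or against an association.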
646

An implementation and initial test of generalized radial basis functions

Wettschereck, Dietrich 27 June 1990
Generalized Radial Basis Functions were used to construct networks that learn input-output mappings from given data. They are developed out of a theoretical framework for approximation based on regularization techniques and represent a class of three-layer networks similar to backpropagation networks with one hidden layer. A network using Gaussian basis functions was implemented and applied to several domains. It was found to perform very well on the two-spirals problem and on the NETtalk task. This paper explains what Generalized Radial Basis Functions are; describes the algorithm, its implementation, and the tests that have been conducted; and draws the conclusion that network implementations using Generalized Radial Basis Functions are a successful approach for learning from examples. / Graduation date: 1991
647

A Neural Model of Call-counting in Anurans

Houtman, David B. 11 October 2012
Temporal features in the vocalizations of animals and insects play an important role in a diverse range of species-specific activities such as mate selection, territoriality, and hunting. The neural mechanisms underlying the response to such stimuli remain largely unknown. Two species of anuran amphibian provide a starting point for the investigation of the neurological response to species-specific advertisement calls. Neurons in the anuran midbrain of Rana pipiens and Hyla regilla exhibit an atypical response when presented with a fixed number of advertisement calls. The general response to these calls is mostly inhibitory; only when the correct number of calls is presented at the correct repetition rate will this inhibition be overcome and the neurons reach a spiking threshold. In addition to rate-dependent call-counting, these neurons are sensitive to missed calls: a pause of sufficient duration—the equivalent of two missed calls—effectively resets a neuron to its initial condition. These neurons thus provide a model system for investigating the neural mechanisms underlying call-counting and interval specificity in audition. We present a minimal computational model in which competition between finely-tuned excitatory and inhibitory synaptic currents, combined with a small propagation delay between the two, broadly explains the three key features observed: rate dependence, call counting, and resetting. While limitations in the available data prevent the determination of a single set of parameters, a detailed analysis indicates that these parameters should fall within a certain range of values. Furthermore, while network effects are counter-indicated by the data, the model suggests that recruitment of neurons plays a necessary role in facilitating the excitatory response of counting neurons—although this hypothesis remains untested. 
Despite these limitations, the model sheds light on the mechanisms underlying the biophysics of counting, and thus provides insight into the neuroethology of amphibians in general.
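The model described above balances finely tuned excitatory and inhibitory currents with a propagation delay between them; a far cruder caricature already reproduces two of the three signatures (rate dependence and resetting). The sketch below is purely illustrative and is not the paper's model: a single leaky trace is incremented per call and decays between calls, with all constants invented for the example.

```python
import numpy as np

def count_calls(call_times, tau=1.0, gain=1.0, threshold=3.0):
    """Leaky-integrator caricature of a counting neuron: each call adds
    `gain` to a trace that decays with time constant `tau`.  Only calls
    arriving fast enough let the trace build up to `threshold` (rate
    dependence); a long pause lets it decay back toward zero (reset)."""
    v, t_prev, spikes = 0.0, None, []
    for t in call_times:
        if t_prev is not None:
            v *= np.exp(-(t - t_prev) / tau)  # passive decay during the gap
        v += gain
        if v >= threshold:
            spikes.append(t)                  # neuron fires ...
            v = 0.0                           # ... and resets
        t_prev = t
    return spikes

fast = [0.25 * k for k in range(5)]   # correct repetition rate
slow = [0.70 * k for k in range(8)]   # too slow: decay wins
print(count_calls(fast))              # fires on the 5th call
print(count_calls(slow))              # never fires
```

The caricature cannot reproduce the precise interval selectivity (too-fast call trains would also fire it); that is exactly what the competing excitatory/inhibitory currents and the delay between them provide in the full model.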
648

Molecular Mechanisms Regulating Developmental Axon Pruning

Singh, Karun 01 August 2008
The formation of neural connections in the mammalian nervous system is a complex process. During development, axons are initially overproduced and compete for limited quantities of target-derived growth factors. Axons which participate in functional circuits and secure appropriate amounts of growth factors are stabilized, while those axons that are either inappropriately connected or do not obtain sufficient concentrations of growth factors are eliminated in a process termed ‘axon pruning’. In this thesis, I examined the mechanisms that regulate pruning of peripheral, NGF-dependent sympathetic neurons that project to the eye. I determined that pruning of these projections in vivo requires the p75 neurotrophin receptor (p75NTR) and synthesis of brain-derived neurotrophic factor (BDNF) from the activity-dependent exon IV promoter. Furthermore, analysis of an in vitro model of axon competition, which is regulated by the interplay between nerve growth factor (NGF) and neuronal activity, revealed that p75NTR and BDNF are also essential for axon competition in culture. In this model, in the presence of NGF, neural activity confers a competitive growth advantage to stimulated, active axons by enhancing downstream TrkA (NGF receptor) signaling locally in axons. More interestingly, the unstimulated, inactive axons deriving from the same and neighboring neurons acquire a "growth disadvantage" due to secreted BDNF acting through p75NTR, which induces axon degeneration by suppressing TrkA signaling that is essential for axonal integrity. These data support a model where, during developmental axon competition, successful axons secrete BDNF in an activity-dependent fashion which activates p75NTR on unsuccessful neighboring axons, suppressing TrkA signaling, and ultimately promoting pruning by a degenerative mechanism.
649

Investigation in modeling a load-sensing pump using dynamic neural unit based dynamic neural networks

Li, Yuwei 15 January 2007
Because of the highly complex structure of the load-sensing pump, its compensators, and its controlling elements, simulation of a load-sensing pump system poses many challenges to researchers. One way to overcome some of the difficulties of creating a complex computer model is to use a black-box approach, which approximates the system behaviour by analyzing input/output relationships; the details of the physical phenomena are not of direct concern. Neural networks can be used to implement the black-box concept for system identification: it has been shown that neural networks can model very complex behaviour, and there is a well-defined set of neuron and neural network structures. Previous studies have shown the problems and limitations of dynamic system modeling using static-neuron-based neural networks. Some new neuron structures, Dynamic Neural Units (DNUs), have been developed, opening a new area of research in system modelling.

The overall objective of this research was to investigate the feasibility of using a dynamic neural unit (DNU) based dynamic neural network (DNN) to model a hydraulic component (specifically, a load-sensing pump), such that the model could be used in simulation with other component models to aid hydraulic system design. To be truly representative of the component, the neural network model must be valid for both the steady-state and the transient response. Because a load-sensing pump system comprises three components (compensator, pump, and control valve), three different model structures were practical (pump, compensator, and valve; compensator and pump; and pump only), and all three were analysed thoroughly in this study. The DNU-based DNN was used to build the pump-only model, which covers a portion of a complete load-sensing pump.

After the trained DNN was tested with a wide variety of system inputs, and because of the steady-state error it exhibited, a compensation-equation approach and a combined DNN and static neural network (SNN) approach were adopted to overcome the steady-state deviation.

It was verified through this work that the DNU-based DNN can capture the dynamics of a nonlinear system, and that the DNN and SNN combination can eliminate the steady-state error generated by the trained DNN.

The first major contribution of this research was investigating the feasibility of using the DNN to model a nonlinear system and eliminating the error-accumulation problem encountered in previous work. The second major contribution is exploring the combination of DNN and SNN to make the neural network model valid for both the steady-state and the transient response.
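What distinguishes a dynamic neural unit from a static neuron is internal state. As a hedged sketch (the exact DNU structure used in the thesis is not specified here), the unit below wraps a first-order linear filter in a tanh nonlinearity; the coefficients are illustrative, not learned.

```python
import numpy as np

class DynamicNeuralUnit:
    """Sketch of a dynamic neural unit: a first-order IIR filter followed
    by a static nonlinearity, so the unit carries memory of past inputs,
    unlike a static neuron.  Coefficients are illustrative placeholders."""

    def __init__(self, b0=0.5, b1=0.3, a1=0.6, gain=1.0):
        self.b0, self.b1, self.a1, self.gain = b0, b1, a1, gain
        self.u_prev = 0.0   # previous input
        self.v_prev = 0.0   # previous internal state

    def step(self, u):
        # internal linear dynamics: v[n] = b0*u[n] + b1*u[n-1] + a1*v[n-1]
        v = self.b0 * u + self.b1 * self.u_prev + self.a1 * self.v_prev
        self.u_prev, self.v_prev = u, v
        return np.tanh(self.gain * v)   # static nonlinearity on the output

unit = DynamicNeuralUnit()
step_response = [unit.step(1.0) for _ in range(20)]
print(step_response[0], step_response[-1])   # output rises toward steady state
```

Because each unit is itself a small dynamical system, a network of such units can represent transient behaviour, like a pump's response to a step in load pressure, that a static network would have to approximate with delayed-input taps.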
650

The Implications of Developmental and Evolutionary Relationships between Pancreatic Beta-cells and Neurons

Arntfield, Margot Elinor 06 December 2012
A pancreatic stem cell could provide the tissue necessary for widespread β-cell transplantation therapy for diabetes. It is disputed whether pancreatic stem cells or β-cell replication are responsible for maintenance and regeneration of endocrine cells. Evidence presented here shows that pancreatic stem cells express insulin and produce multiple endocrine, exocrine and neural cells in vitro and in vivo. The human pancreas also contains stem cells that produce functional β-cells capable of reducing blood sugar levels in a diabetic mouse. Initial studies of pancreatic stem cells grown clonally in vitro indicated that they produced large numbers of neurons, suggesting they may be derived from the neural crest. Evidence shows that there are at least two distinct developmental origins for stem cells in the pancreas; one from the pancreatic lineage that produces endocrine and exocrine cells and one from the neural crest lineage that produces neurons and Schwann cells. Furthermore, pancreatic stem cells require the developmental transcription factor, Pax6, for endocrine cell formation suggesting they are using expected differentiation pathways. There is an interesting evolutionary connection between pancreatic β-cells and neurons which was applied to the derivation of pancreatic stem cells from human embryonic stem cells by using a clonal neural stem cell assay. These pancreatic stem cells express pancreatic and neural markers, self-renew and differentiate into insulin-expressing cells. The overexpression of SOX17 in these cells increases stem cell formation and self-renewal but inhibits differentiation. 
Overall I will show that there is a genuine stem cell in the adult mammalian pancreas capable of producing functional β-cells, that this stem cell is derived from the pancreatic developmental lineage but the pancreas also contains stem cells from the neural crest lineage, and that the neural stem cell assays that have identified these adult stem cells can be applied to the derivation of a pancreatic stem cell from hESCs.
