21 |
A CURRENT-BASED WINNER-TAKE-ALL (WTA) CIRCUIT FOR ANALOG NEURAL NETWORK ARCHITECTURE. Rijal, Omkar 01 December 2022 (has links)
The Winner-Take-All (WTA) operation is an essential neural network primitive for locating the most active neuron, and it is used across a wide range of applications. A WTA circuit selects the maximum of its inputs while inhibiting all other nodes. Analog implementations can be considerably more efficient than their digital counterparts, with a significantly smaller design footprint and shorter processing time. This research presents a current-based Winner-Take-All circuit for analog neural networks. A compare-and-pass (CAP) mechanism is used: each input pair is compared, and the winner is selected and passed to the next level. Inputs are compared by a sense amplifier, which generates high and low voltage signals at its output node; these voltage signals drive logic gates that select the winner and pass it to the next level. Each winner also follows a sequence of digital bits to be selected. The findings of the SPICE simulation are presented. For a memristive deep neural network model on the MNIST, Fashion-MNIST, and CIFAR10 datasets, the simulation results show that the winner class is identified accurately, with average differences between the input and the selected winner output current of 0.00795 µA, 0.01076 µA, and 0.02364 µA, respectively. Experimental results with transient noise analysis are also presented.
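As a software analogy (not the analog circuit itself), the compare-and-pass selection described above can be sketched as a pairwise tournament: each pair of inputs is compared and the larger one advances a level until a single winner remains. The function name and values below are illustrative, not taken from the thesis.

```python
def winner_take_all(currents):
    """Return the index of the largest input via pairwise compare-and-pass."""
    # Start with (index, value) pairs at level 0.
    level = list(enumerate(currents))
    while len(level) > 1:
        nxt = []
        # Compare adjacent pairs; the larger current passes to the next level.
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append(a if a[1] >= b[1] else b)
        # An odd element out advances unchallenged.
        if len(level) % 2 == 1:
            nxt.append(level[-1])
        level = nxt
    return level[0][0]

print(winner_take_all([0.2, 0.9, 0.4, 0.7]))  # index of the maximum input
```

In hardware, each comparison corresponds to a sense amplifier stage and the "pass" step to the logic gates selecting which current reaches the next level.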
|
22 |
Neuronal mechanisms underlying appetitive learning in the pond snail Lymnaea stagnalis. Staras, Kevin January 1997 (has links)
1. Lymnaea was the subject of an established behavioural conditioning paradigm in which pairings of a neutral lip tactile stimulus (CS) and a sucrose food stimulus (US) result in a conditioned feeding response to the CS alone. The current objective was to dissect trained animals and examine electrophysiological changes in the feeding circuitry that may underlie this learning. 2. Naive subjects were used to confirm that US and CS responses in vivo persisted in vitro, since this is a prerequisite for survival of a learned memory trace. This required the development of a novel semi-intact preparation facilitating CS presentation and simultaneous access to the CNS. 3. The nature and function of the CS response was investigated using naive animals. Intracellular recordings revealed that the tactile CS evokes specific, consistent synaptic responses in identified feeding neurons. Extracellular recording techniques and anatomical investigations showed that these responses occurred through a direct pathway linking the lips to the feeding circuitry. A buccal neuron was characterized which showed lip tactile responses and supplied synaptic inputs to feeding neurons, indicating that it is a second-order mechanosensory neuron in the CS pathway. 4. Animals trained using the behavioural conditioning paradigm were tested for conditioned responses and subsequently dissected. Intracellular recording from specific identified feeding motoneurons revealed that CS presentation resulted in significant activation of the feeding network compared to control subjects. This activation was combined with an increase in the amplitude of a specific synaptic input and an elevation in the extracellular spike activity recorded from a feeding-related connective. A neuronal mechanism to account for these findings is presented. 5. The role of motoneurons in the feeding circuit was reassessed. It is demonstrated, contrary to the current model, that muscular motoneurons make an important contribution during feeding rhythms through previously unreported electrotonic CPG connections.
|
23 |
Acquisition and analysis of heart sound data. Hebden, John Edward January 1997 (has links)
No description available.
|
24 |
Vibration design by means of structural modification. Akbar, Shahzad January 1998 (has links)
No description available.
|
25 |
Likelihood analysis of the multi-layer perceptron and related latent variable models. Foxall, Robert John January 2001 (has links)
No description available.
|
26 |
A dynamical study of the generalised delta rule. Butler, Edward January 2000 (has links)
No description available.
|
27 |
Investigation in the application of complex algorithms to recurrent generalized neural networks for modeling dynamic systems. Yackulic, Richard Matthew Charles 04 April 2011 (has links)
<p>Neural networks are mathematical formulations that can be "trained" to perform certain functions. The particular application of interest in this thesis is to "model" a physical system using only input-output information: the physical system and the neural network are subjected to the same inputs, and the neural network is trained to produce the same output as the physical system for any input. The neural network model so created is essentially a "black-box" representation of the physical system. This approach has been used at the University of Saskatchewan to model a load sensing pump (a component used to create a constant flow rate independent of variations in pressure downstream of the pump). These studies have shown the versatility of neural networks for modeling dynamic and non-linear systems; however, they also indicated challenges associated with the morphology of neural networks and the algorithms used to train them. These challenges were the motivation for this research.</p>
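The black-box workflow above can be illustrated without any neural network machinery at all. The thesis trains a network; as a minimal stand-in, this sketch identifies a first-order discrete model y[k+1] = a*y[k] + b*u[k] from simulated input-output records using least squares. The system constants and signal lengths are assumptions for illustration; only the workflow (excite the system, record input-output pairs, fit a model to reproduce the output) matches the text.

```python
import random

def simulate(u, a=0.9, b=0.1):
    """Play input sequence u through the 'physical system' y[k+1] = a*y[k] + b*u[k]."""
    y = [0.0]
    for k in range(len(u) - 1):
        y.append(a * y[k] + b * u[k])
    return y

def fit_first_order(u, y):
    """Solve the normal equations for [a, b] minimizing the one-step prediction error."""
    s_yy = s_yu = s_uu = s_ty = s_tu = 0.0
    for k in range(len(u) - 1):
        t = y[k + 1]
        s_yy += y[k] * y[k]; s_yu += y[k] * u[k]; s_uu += u[k] * u[k]
        s_ty += t * y[k];    s_tu += t * u[k]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_ty * s_uu - s_tu * s_yu) / det
    b = (s_tu * s_yy - s_ty * s_yu) / det
    return a, b

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(200)]   # excitation input
y = simulate(u)                                    # recorded system output
a, b = fit_first_order(u, y)
print(round(a, 3), round(b, 3))  # recovers a = 0.9, b = 0.1 from data alone
```

A neural network replaces the fixed linear model with a flexible nonlinear one, but the identification loop is the same.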
<p>Within the Fluid Power Research group at the University of Saskatchewan, a "global" objective of research in the area of load sensing pumps has been to apply dynamic neural networks (DNN) in the modeling of load sensing systems. To fulfill this objective, a recurrent generalized neural network (RGNN) morphology, along with a non-gradient-based training approach called the complex algorithm (CA), was chosen to train a load sensing pump neural network model. However, preliminary studies indicated that the combination of recurrent generalized neural networks and complex training proved ineffective even for second-order single-input single-output (SISO) systems when the initial synaptic weights of the neural network were chosen at random.</p>
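The complex algorithm belongs to the Box family of population searches: the worst point is repeatedly reflected through the centroid of the remaining points, so no gradients are needed. The following is a minimal sketch of that family; the population size, reflection factor, bounds, and test function are illustrative assumptions, not values from the thesis.

```python
import random

def complex_search(f, dim, bounds=(-2.0, 2.0), n_points=None,
                   alpha=1.3, iters=300, seed=1):
    """Box-style complex search: reflect the worst point through the centroid."""
    rng = random.Random(seed)
    n_points = n_points or 2 * dim
    pts = [[rng.uniform(*bounds) for _ in range(dim)] for _ in range(n_points)]
    for _ in range(iters):
        pts.sort(key=f)
        worst = pts[-1]
        # Centroid of all points except the worst.
        centroid = [sum(p[i] for p in pts[:-1]) / (len(pts) - 1)
                    for i in range(dim)]
        # Reflect the worst point through the centroid by factor alpha.
        new = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        # Retract toward the centroid while the move does not improve.
        while f(new) >= f(worst):
            new = [(n + c) / 2 for n, c in zip(new, centroid)]
            if max(abs(n - c) for n, c in zip(new, centroid)) < 1e-12:
                break
        if f(new) < f(worst):
            pts[-1] = new
    return min(pts, key=f)

# Minimize a simple quadratic "error surface" over three weights.
best = complex_search(lambda w: sum(x * x for x in w), dim=3)
print([round(x, 4) for x in best])  # close to the minimum at the origin
```

Because each candidate is generated geometrically from the current population, the quality of the initial points matters greatly; this is consistent with the sensitivity to initial weights reported above.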
<p>Because of these initial findings, the focus of this research and its objectives shifted toward understanding the capabilities and limitations of recurrent generalized neural networks and non-gradient training (specifically the complex algorithm). To do so, a second-order transfer function was considered, from which an approximate recurrent generalized neural network representation was obtained. The network was tested under a variety of initial weight intervals and numbers of weights being optimized. A definite trend was noted: as the initial values of the synaptic weights were set closer to the "exact" values calculated for the system, the robustness of the network and the chance of finding an acceptable solution increased. Two types of training signals were used in the study: step response and frequency-based training. When the two were compared, step response training produced a more generalized network.</p>
<p>Another objective of this study was to compare the CA to a proven non-gradient training method; the method chosen was genetic algorithm (GA) training. For the purposes of this study, two modifications were made to the GA found in the literature. The most significant change was the assurance that the error would never increase during the training of RGNNs using the GA. This led to a collapse of the population around a specific point and limited the GA's ability to obtain an accurate RGNN.</p>
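The collapse described above can be reproduced in a toy setting. This sketch (not the thesis code) runs a tiny GA whose "never worse" rule rejects any child worse than the incumbent best; diversity then shrinks to a small neighbourhood of one point. All parameters are illustrative assumptions.

```python
import random

def ga_step(pop, f, rng, sigma=0.1):
    """One generation with truncation selection and a 'never worse' acceptance rule."""
    pop = sorted(pop, key=f)
    best = pop[0]
    children = []
    for _ in range(len(pop)):
        parent = rng.choice(pop[: len(pop) // 2])          # truncation selection
        child = [x + rng.gauss(0, sigma) for x in parent]  # Gaussian mutation
        # "Never worse" rule: any child worse than the current best is
        # replaced by a copy of the best, so error can never increase.
        children.append(child if f(child) <= f(best) else best)
    return children

rng = random.Random(0)
f = lambda w: sum(x * x for x in w)                # toy error surface
pop = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for _ in range(100):
    pop = ga_step(pop, f, rng)

# Diversity after training: largest coordinate gap between any two members.
spread = max(max(abs(p[i] - q[i]) for i in range(2)) for p in pop for q in pop)
print(round(spread, 4))  # the population has collapsed to a tiny region
```

The rule guarantees monotone error, but at the cost of exploration: most children are clones of the incumbent, which is exactly the collapse the study reports.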
<p>The research produced four conclusions. First, the robustness of training RGNNs using the CA depends on the initial population of weights. Second, when using GAs, a specific algorithm must be chosen that allows the newly calculated population weights to move freely while still ensuring a stable output from the RGNN. Third, when the GA was compared to the CA, the CA produced more generalized RGNNs. Fourth, based on the results of training RGNNs with the CA and GA on step response and frequency-based training data sets, networks trained using step response data are more generalized in the majority of cases.</p>
|
29 |
The intertemporary studies of financial crisis prediction model. Kung, Chih-Ming 29 June 2000 (has links)
The purpose of this article is to identify the factors that effectively explain a corporation's financial structure.
|
30 |
The Speech Recognition System using Neural Networks. Chen, Sung-Lin 06 July 2002 (links)
This paper describes an isolated-word, speaker-independent Mandarin digit speech recognition system based on backpropagation neural networks (BPNN). The recognition rate reaches 95%, and when the system is adapted to a new user with an adaptive modification method, the recognition rate exceeds 99%. To implement the speech recognition system on digital signal processors (DSP), we use a neuron-cancellation rule in accordance with the BPNN: under this rule, the system cancels about one third of the neurons and reduces memory size by 20% to 40%, while the recognition rate still reaches 85%. For the output structure of the BPNN, we present a binary code to supersede the one-to-one model. In addition, we propose a new endpoint detection algorithm for the recorded signals, which avoids disturbance without complex computations.
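The output-coding idea can be made concrete: with one-to-one (one-hot) coding, ten Mandarin digits need ten output neurons, while a binary code needs only ceil(log2(10)) = 4. The helper names below are illustrative, not from the paper.

```python
import math

def one_hot(digit, n_classes=10):
    """One-to-one coding: one output neuron per class."""
    return [1 if i == digit else 0 for i in range(n_classes)]

def binary_code(digit, n_classes=10):
    """Binary coding: ceil(log2(n_classes)) output neurons suffice."""
    bits = math.ceil(math.log2(n_classes))
    return [(digit >> i) & 1 for i in reversed(range(bits))]

def decode_binary(code):
    """Recover the class index from thresholded binary outputs."""
    digit = 0
    for bit in code:
        digit = (digit << 1) | bit
    return digit

print(one_hot(6))       # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0] -- 10 outputs
print(binary_code(6))   # [0, 1, 1, 0] -- only 4 outputs
print(decode_binary(binary_code(6)))  # 6
```

Fewer output neurons mean fewer output-layer weights to store, which works together with the neuron-cancellation rule when fitting the network into DSP memory.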
|