1 |
Signal transduction in neural systems: role of excitability / 王宇靑, Wang, Yuqing. January 2000 (has links)
published_or_final_version / Physics / Doctoral / Doctor of Philosophy
|
2 |
Using self-organising catalytic networks in adaptive systems / Huening, Harald Frank. January 2001 (has links)
No description available.
|
3 |
A Fuzzy/Neural Approach to Cost Prediction with Small Data Sets / Danker-McDermot, Holly. 21 May 2004 (has links)
The objective of this work is to create an accurate cost estimate for NASA engine tests at the John C. Stennis Space Center testing facilities using various combinations of fuzzy and neural systems. The data set available for this cost-prediction problem consists of variables such as test duration, thrust, and many other similar quantities; unfortunately, it is small and incomplete. The first method implemented to perform this cost estimate uses the locally linear embedding (LLE) algorithm for nonlinear dimensionality reduction, with the reduced data then passed to an adaptive network-based fuzzy inference system (ANFIS). The second method is a two-stage system in which several ANFIS models with either single or multiple inputs produce cost estimates whose outputs are then passed to a backpropagation-trained neural network for the final cost prediction. Finally, the third method uses a radial basis function network (RBFN) to predict the engine test cost.
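As a rough illustration of the third method, the sketch below fits a small RBFN by fixing centers at training points and solving for the output weights with ridge-regularized least squares, a common way to train an RBFN on a small data set. The toy data (two inputs standing in for quantities like test duration and thrust) and all parameter values are assumptions for illustration, not the thesis's actual data or procedure.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    # Gaussian radial basis activations: exp(-gamma * ||x - c||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
# Hypothetical stand-in for the small engine-test data set: two inputs
# (think normalized test duration and thrust) and one cost target.
X = rng.uniform(0.0, 1.0, size=(20, 2))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0.0, 0.01, 20)

centers = X[:8]                    # a few training points as RBF centers
Phi = rbf_features(X, centers, gamma=2.0)

# Output weights by ridge-regularized least squares, with the centers fixed.
lam = 1e-6
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

pred = Phi @ w
mse = float(np.mean((pred - y) ** 2))
print(mse)                         # small training error on the toy data
```

With the centers fixed, training reduces to a linear solve, which is one reason RBFNs are attractive for small data sets like the one described here.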
|
4 |
Dynamic neural network-based feedback linearization of electrohydraulic suspension systems / Dangor, Muhammed. 11 September 2014 (has links)
Resolving the trade-offs between suspension travel, ride comfort, road holding, vehicle handling and power consumption is the primary challenge in designing Active Vehicle Suspension Systems (AVSS). Controller tuning with global optimization techniques is proposed to realise the best compromise between these conflicting criteria. The optimization methods adopted include Controlled Random Search (CRS), Differential Evolution (DE), Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Pattern Search (PS). Quarter-car and full-car nonlinear AVSS models that incorporate electrohydraulic actuator dynamics are designed. Two control schemes are proposed for this investigation. The first is conventional Proportional-Integral-Derivative (PID) control, applied in a multi-loop architecture to stabilise the actuator and manipulate the primary control variables. Global optimization-based tuning achieved enhanced responses in each aspect of PID-based AVSS performance and a better resolution of the conflicting criteria, with DE performing best. The full-car PID-based AVSS was analysed for DE as well as for modified variants of PSO and CRS. These modified methods surpassed their predecessors with a better performance index; this was anticipated, as they were augmented to permit efficient exploration of the search space with enhanced flexibility in the algorithms. However, DE still maintained the best outcome in this aspect. The second scheme is indirect adaptive dynamic neural network-based feedback linearization (DNNFBL), in which neural networks were trained with the optimization algorithms and feedback linearization control was then applied to the resulting models. PSO generated the most desirable results, followed by DE; the remaining approaches exhibited significantly weaker results for this control method. These outcomes were attributed to the superior search characteristics of the DE and PSO algorithms, as well as to the nature of the problem, which now had more variables. Its adaptive nature and ability to cancel system nonlinearities saw the full-car PSO-based DNNFBL controller outperform its PID counterpart: it achieved a better resolution between performance criteria, minimal chatter, superior parameter sensitivity, and improved suspension travel, roll acceleration and control force responses.
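To make the optimization-based tuning idea concrete, below is a bare-bones DE (rand/1/bin) loop tuning PID gains on a toy damped mass-spring plant standing in for one suspension corner. The plant parameters, gain bounds and DE settings are invented for illustration and are far simpler than the thesis's quarter-car and full-car electrohydraulic models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plant: damped mass-spring (m, c, k are illustrative values), with
# the PID force u driving the deflection x back to zero from a 5 cm bump.
def ise_cost(gains, m=250.0, c=1000.0, k=16000.0, dt=1e-3, T=1.0):
    kp, ki, kd = gains
    x, v, integ, prev_err = 0.05, 0.0, 0.0, -0.05
    cost = 0.0
    for _ in range(int(T / dt)):
        err = -x
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        v += (u - c * v - k * x) / m * dt
        x += v * dt
        if abs(x) > 1.0:          # unstable gains: penalize and bail out
            return 1e6
        cost += x * x * dt        # integral of squared error (ISE)
    return cost

# Differential evolution (rand/1/bin) over the three PID gains.
bounds = np.array([[0.0, 5e4], [0.0, 1e4], [0.0, 5e3]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 3))
fit = np.array([ise_cost(p) for p in pop])
F, CR = 0.7, 0.9
for _ in range(30):               # generations
    for i in range(len(pop)):
        a, b, cc = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + F * (b - cc), bounds[:, 0], bounds[:, 1])
        cross = rng.random(3) < CR
        trial = np.where(cross, trial, pop[i])
        f = ise_cost(trial)
        if f < fit[i]:            # greedy one-to-one selection
            pop[i], fit[i] = trial, f

best = pop[fit.argmin()]
print(best, fit.min())            # DE-tuned (kp, ki, kd) and its ISE
```

The greedy one-to-one selection is what makes DE robust on noisy, multimodal tuning landscapes like this one; swapping the cost function for a weighted sum of suspension travel, acceleration and control effort recovers the multi-criteria setting the abstract describes.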
|
5 |
Quadratic function neural networks. January 1991 (has links)
by Leung Chi Sing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves 84-87.
Contents:
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Definition of Neural Networks --- p.3
Chapter 1.2 --- Processing Elements (Neurons) and Activation Functions --- p.4
Chapter 1.3 --- Topology --- p.7
Chapter 1.4 --- Cross-count --- p.9
Chapter 1.5 --- Learning Rules --- p.10
Chapter 1.6 --- Categories of Neural Networks --- p.11
Chapter 2 --- Rotation Transformations --- p.13
Chapter 2.1 --- 2-Dimensional Rotations --- p.13
Chapter 2.2 --- High Dimensional Rotations --- p.15
Chapter 2.3 --- Hardware Implementation --- p.17
Chapter 2.3.1 --- 2-Dimensional Rotation Block (2DRB) --- p.18
Chapter 2.3.2 --- High Dimensional Rotation Block --- p.20
Chapter 3 --- Rotation Quadratic Function Neural Network (RQFN) --- p.24
Chapter 3.1 --- Classical Quadratic Function Neurons (QFN) --- p.25
Chapter 3.2 --- Rotation Quadratic Function Neural Network --- p.27
Chapter 3.3 --- Learning Rule --- p.33
Chapter 3.4 --- Comparison between RQFN and QFN --- p.39
Chapter 3.4.1 --- Delay --- p.39
Chapter 3.4.2 --- Fan-in --- p.39
Chapter 3.4.3 --- Geometric Interpretation --- p.40
Chapter 3.4.4 --- Complexity --- p.40
Chapter 3.5 --- Simulations --- p.42
Chapter 3.5.1 --- XOR Test --- p.42
Chapter 3.5.2 --- Simple Two-dimensional Test --- p.47
Chapter 3.5.3 --- Three-dimensional Test --- p.50
Chapter 3.5.4 --- Separated Learning Test --- p.52
Chapter 4 --- Enhanced RQFN --- p.56
Chapter 4.1 --- Many-Class RQFN (MCRQFN) --- p.56
Chapter 4.1.1 --- Topology of MCRQFN --- p.57
Chapter 4.1.2 --- Learning Algorithm of MCRQFN --- p.57
Chapter 4.2 --- Application Experiment --- p.59
Chapter 4.2.1 --- Introduction --- p.59
Chapter 4.2.2 --- Feature Extraction --- p.60
Chapter 4.2.3 --- Configuration --- p.62
Chapter 4.2.4 --- Experiment Results --- p.63
Chapter 4.3 --- Generalized MCRQFN (GMCRQFN) --- p.66
Chapter 4.3.1 --- Topology of GMCRQFN --- p.66
Chapter 4.3.2 --- Learning Algorithm of GMCRQFN --- p.67
Chapter 4.3.3 --- Simulations --- p.69
Chapter 5 --- Conclusion --- p.82
Bibliography --- p.87
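The role of the rotation block can be illustrated with the XOR test listed in Chapter 3.5.1. An axis-aligned quadratic neuron (no cross term) cannot separate XOR, since requiring g(0,1) > 0 and g(1,0) > 0 forces g(0,0) + g(1,1) > 0; after a 45-degree rotation of the inputs, however, a single axis-aligned quadratic neuron suffices. The weights below are hand-picked for illustration, not obtained with the thesis's learning rule.

```python
import numpy as np

# XOR in the plane: (0,0) and (1,1) -> class 0; (0,1) and (1,0) -> class 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
labels = np.array([0, 1, 1, 0])

# 45-degree 2-D rotation, applied to the inputs before the neuron.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Z = X @ R.T                        # rotated coordinates

# Axis-aligned quadratic neuron in the rotated space:
# fire when w1*z1^2 + w2*z2^2 + bias > 0.
w1, w2, bias = 2.0, 0.0, -0.5      # hand-picked weights, not learned
g = w1 * Z[:, 0] ** 2 + w2 * Z[:, 1] ** 2 + bias
pred = (g > 0).astype(int)
print(pred)                        # -> [0 1 1 0], matching the labels
```

After rotation, the two XOR classes differ only in the first rotated coordinate, so one squared term and a bias are enough to separate them.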
|
6 |
Signal transduction in neural systems : role of excitability / Wang, Yuqing. January 2000 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2000. / Includes bibliographical references (leaves iii-iv, 101-111).
|
7 |
The distribution and metabolism of acetylcholine receptors on skeletal muscle fibers during development of the neuromuscular junction / Burden, Steven Jay. January 1977 (has links)
Thesis--Wisconsin. / Vita. Includes bibliographical references (leaves 132-142).
|
8 |
Activity propagation in two-dimensional neuronal networks / Kane, Abdoul. January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 94-97).
|
9 |
Unsupervised Semantic Segmentation through Cross-Instance Representation Similarity / Bishop, Griffin R. 13 May 2020 (has links)
Semantic segmentation methods using deep neural networks typically require huge volumes of annotated data to train properly. Due to the expense of collecting these pixel-level dataset annotations, the problem of semantic segmentation without ground-truth labels has recently been proposed. Many current approaches to unsupervised semantic segmentation frame the problem as a pixel-clustering task and, in particular, focus heavily on color differences between image regions. In this paper, we explore a weakness of this approach: by focusing on color, these methods do not adequately capture relationships between similar objects across images. We present a new approach to the problem and propose a novel architecture that directly captures the characteristic similarities of objects between images. We design a synthetic dataset to illustrate this flaw in an existing model. Experiments on this synthetic dataset show that our method can succeed where the pixel color clustering approach fails. Further, we show that plain autoencoder models can implicitly capture these cross-instance object relationships. This suggests that some generative model architectures may be viable candidates for unsupervised semantic segmentation even with no additional loss terms.
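The color-clustering weakness described above can be sketched in a few lines: two synthetic "images" contain the same object shape in different colors, so a color-based feature rates them dissimilar while a color-invariant shape feature rates them identical. Everything here (image sizes, features, similarity measure) is an invented toy, not the paper's dataset or architecture.

```python
import numpy as np

def make_image(color):
    # An 8x8 RGB image with a square "object" of the given color.
    img = np.zeros((8, 8, 3))
    img[2:6, 2:6] = color
    return img

red_square = make_image([1.0, 0.0, 0.0])
blue_square = make_image([0.0, 0.0, 1.0])

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Color feature: mean RGB of each image. The two squares share no
# color channel, so this feature calls them completely dissimilar.
color_sim = cosine(red_square.mean((0, 1)), blue_square.mean((0, 1)))

# Color-invariant feature: the binary object mask (any channel active).
mask_sim = cosine(red_square.any(-1).astype(float),
                  blue_square.any(-1).astype(float))

print(color_sim, mask_sim)         # -> 0.0 1.0
```

A pixel-color clustering model sees only the first feature, which is why it fails to group the same object class across images; a representation that encodes structure, as the paper proposes, behaves more like the second.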
|
10 |
Modeling Temporal Patterns of Neural Synchronization: Synaptic Plasticity and Stochastic Mechanisms / Zirkle, Joel 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Neural synchrony in the brain at rest is usually variable and intermittent; intervals of predominantly synchronized activity are interrupted by intervals of desynchronized activity. Prior studies suggested that this temporal structure of weakly synchronous activity may be functionally significant: many short desynchronizations may be functionally different from a few long desynchronizations, even if the average synchrony level is the same. In this thesis, we use computational neuroscience methods to investigate the effects of (i) spike-timing-dependent plasticity (STDP) and (ii) noise on the temporal patterns of synchronization in a simple model. The model is composed of two conductance-based neurons connected via excitatory unidirectional synapses. In (i) these excitatory synapses are made plastic; in (ii) two different implementations of noise modelling the stochasticity of membrane ion channels are considered. The plasticity results are taken from our recently published article, while the noise results are currently being compiled into a manuscript.
The dynamics of this network are subjected to the time-series analysis methods used in prior experimental studies. We provide numerical evidence that both STDP and channel noise can alter the synchronized dynamics of the network in several ways, depending on the time scale on which plasticity acts and on the intensity of the noise. In general, however, the action of STDP and noise in the simple network considered here is to promote dynamics with short desynchronizations (i.e. dynamics reminiscent of those observed in experimental studies) over dynamics with longer desynchronizations.
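For readers unfamiliar with STDP, a minimal pair-based additive rule looks like the sketch below: the weight is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with exponential dependence on the lag. This is a generic textbook form with illustrative constants, not necessarily the exact rule or parameters used in the thesis.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # illustrative learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # illustrative time constants (ms)

def stdp_dw(dt_ms):
    # dt_ms = t_post - t_pre, in milliseconds.
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)     # pre before post: LTP
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)       # post before pre: LTD

# Apply the rule to a few hypothetical post-minus-pre spike lags,
# clipping the excitatory weight to [0, w_max] after each update.
w, w_max = 0.5, 1.0
for lag in [5.0, -3.0, 12.0]:
    w = float(np.clip(w + stdp_dw(lag), 0.0, w_max))
print(round(w, 4))                 # -> 0.5029
```

Because potentiation and depression nearly cancel for mixed lags, the weight drifts slowly, and it is this slow drift, on a time scale much longer than the spiking itself, that lets STDP reshape which desynchronization durations the network visits.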
|