91. Recurrent neural networks in the chemical process industries / Lourens, Cecil Albert, 04 September 2012 (has links)
M.Ing. / This dissertation discusses the results of a literature survey into the theoretical aspects and development of recurrent neural networks. In particular, the various architectures and training algorithms developed for recurrent networks are discussed. The characteristics important for the efficient implementation of recurrent neural networks to model nonlinear dynamical processes have also been investigated and are discussed. Process control has been identified as a field of application where recurrent networks may play an important role. The model-based adaptive control strategy is briefly introduced, and the application of recurrent networks to both the direct and the indirect adaptive control strategies is highlighted. In conclusion, the important areas of future research for the successful implementation of recurrent networks in adaptive nonlinear control are identified.
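For context (an editor's illustration, not code from the dissertation): in indirect adaptive control a model of the plant is identified first, and a recurrent network is a natural candidate for that model. A minimal sketch of a recurrent one-step-ahead process predictor, with arbitrary sizes and an untrained forward pass, might look like:

```python
import numpy as np

# Minimal sketch, assuming a SISO process with input u_t and output y_t.
# All dimensions and initializations are arbitrary; in practice the
# weights would be fitted by, e.g., backpropagation through time (BPTT).
rng = np.random.default_rng(0)
n_hidden = 16
W_in = rng.normal(0.0, 0.5, (n_hidden, 2))          # [u_t, y_t] -> hidden state
W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # recurrent connections
w_out = rng.normal(0.0, 0.5, n_hidden)              # hidden state -> y_{t+1}

def predict_one_step(u, y):
    """Predict y[t+1] from the input/output history up to time t."""
    h = np.zeros(n_hidden)
    preds = []
    for u_t, y_t in zip(u, y):
        h = np.tanh(W_in @ np.array([u_t, y_t]) + W_rec @ h)
        preds.append(w_out @ h)
    return np.array(preds)

u = rng.normal(size=100)                             # plant input sequence
y = np.convolve(np.tanh(u), [0.5, 0.3, 0.2])[:100]  # toy nonlinear plant
print(predict_one_step(u, y)[:3])                    # untrained predictions
```

In the indirect strategy a controller is then derived from the identified model; in the direct strategy the network itself is trained to act as the controller.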
92. Formation of the complex neural networks under multiple constraints / Chen, Yuhan, 01 January 2013 (has links)
No description available.
93. A model of adaptive invariance / Wood, Jeffrey James, January 1995
This thesis is about adaptive invariance, and a new model of it: the Group Representation Network. We begin by discussing the concept of adaptive invariance. We then present standard background material, mostly from the fields of group theory and neural networks. Following this we introduce the problem of invariant pattern recognition and describe a number of methods for solving various instances of it. Next, we define the Symmetry Network, a connectionist model of permutation invariance, and we develop some new theory of this model. We also extend the applicability of the Symmetry Network to arbitrary finite group actions. We then introduce the Group Representation Network (GRN) as an abstract model, with which in principle we can construct concomitants between arbitrary group representations. We show that the GRN can be regarded as a neural network model, and that it includes the Symmetry Network as a submodel. We apply group representation theory to the analysis of GRNs. This yields general characterizations of the allowable activation functions in a GRN and of their weight matrix structure. We examine various generalizations and restricted cases of the GRN model, and in particular look at the construction of GRNs over infinite groups. We then consider the issue of a GRN's discriminability, which relates to the problem of graph isomorphism. We look next at the computational abilities of the GRN, and postulate that it is capable of approximately computing any group concomitant. We show constructively that any polynomial concomitant can be computed by a GRN. We also prove that a variety of standard models for invariant pattern recognition can be viewed as special instances of the GRN model. Finally, we propose that the GRN model may be biologically plausible and give suggestions for further research.
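A concrete aside (a standard construction, not quoted from the thesis): the weight-sharing constraint behind permutation symmetry is easy to state. A linear layer commutes with every permutation of its \(n\) inputs exactly when its weight matrix has the two-parameter form \(\lambda I + \gamma \mathbf{1}\mathbf{1}^\top\), the kind of structure that the representation-theoretic analysis of GRN weight matrices generalizes. A minimal sketch, with class and parameter names chosen by the editor:

```python
import torch
import torch.nn as nn

class PermEquivariantLinear(nn.Module):
    """Linear layer constrained to commute with input permutations:
    W = lam * I + gam * (1 1^T). Illustrative sketch of Symmetry-Network
    style weight sharing; not code from the thesis."""

    def __init__(self):
        super().__init__()
        self.lam = nn.Parameter(torch.randn(()))  # weight on each unit itself
        self.gam = nn.Parameter(torch.randn(()))  # weight shared across all pairs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n); works for any n because only two scalars are learned
        return self.lam * x + self.gam * x.sum(dim=-1, keepdim=True)

# Equivariance check: permuting the inputs permutes the outputs identically.
layer = PermEquivariantLinear()
x = torch.randn(2, 5)
perm = torch.randperm(5)
assert torch.allclose(layer(x)[:, perm], layer(x[:, perm]), atol=1e-6)
```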
94. The combination of AI modelling techniques for the simulation of manufacturing processes / Korn, Stefan, January 1998
No description available.
95. COLANDER: Convolving Layer Network Derivation for E-recommendations / Timokhin, Dmitriy, 01 June 2021 (has links) (PDF)
Many consumer-facing companies have large-scale data sets that they use to create recommendations for their users. These recommendations are usually based on the information the company has about the user and about the item in question. Based on these two sets of features, models are created and tuned to produce the best possible recommendations. A third source of data, present in most cases, is the history of past interactions a user has had with other items. The relationships that a model can identify between this history and the other two types of data can, we believe, improve the prediction of how a user will interact with a given item. We propose a method that exposes the model to these relationships during the training phase while relying only on the user and item data during the prediction phase. Using ideas from convolutional neural networks (CNNs) and collaborative filtering, our method manipulates the weights in the first layer of the network in a way that achieves this goal.
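The abstract leaves the first-layer manipulation unspecified, so the following is only a hypothetical sketch of the general idea, letting interaction history shape the first layer during training while prediction consumes features alone. The SVD-based initialization and every name below are the editor's assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors, n_item_feats = 100, 50, 8, 12

# Binary user-item interaction matrix (the "third set of data").
interactions = (rng.random((n_users, n_items)) > 0.9).astype(float)
item_feats = rng.normal(size=(n_items, n_item_feats))  # item feature vectors

# Collaborative-filtering step: SVD gives item factors encoding
# "users who interacted with X also interacted with Y" structure.
_, _, vt = np.linalg.svd(interactions, full_matrices=False)
item_factors = vt[:n_factors].T                        # (n_items, n_factors)

# Map item features onto the CF factors by least squares and use the
# result as first-layer weights, so interaction structure informs
# training even though prediction later sees only user/item features.
w1, *_ = np.linalg.lstsq(item_feats, item_factors, rcond=None)
hidden = np.tanh(item_feats @ w1)                      # features-only forward pass
print(hidden.shape)                                    # (n_items, n_factors)
```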
96. A neural network approach to burst detection / Mounce, Steve R., Day, Andrew J., Wood, Alastair S., Khan, Asar, Widdop, Peter D., Machell, James, January 2002
No description available.
97. The implementation of generalised models of magnetic materials using artificial neural networks / Saliah-Hassane, Hamadou, 09 1900 (has links)
No description available.
98. On the trainability, stability, representability, and realizability of artificial neural networks / Wang, Jun, January 1991
No description available.
99. On functional dimension of univariate ReLU neural networks / Liang, Zhen, January 2024
Thesis advisor: Elisenda Grigsby / The space of parameter vectors for a feedforward ReLU neural network with a fixed architecture is a high-dimensional Euclidean space used to represent the associated class of functions. However, there exist well-known global symmetries, as well as poorly understood hidden symmetries, that leave the computed function unchanged even though the parameter settings differ. This makes the true dimension of the space of functions smaller than the number of parameters. In this thesis, we are interested in the structure of hidden symmetries for neural networks at various parameter settings, and in particular for neural networks with architecture \((1,n,1)\). For this class of architectures, we fully categorize the insufficiency of local functional dimension coming from activation patterns and give a complete list of combinatorial criteria guaranteeing that a parameter setting admits no hidden symmetries coming from the slopes of piecewise-linear functions in the parameter space. Furthermore, we compute the probability that these hidden symmetries arise, which is rather small compared to the difference between the functional dimension and the number of parameters. This suggests the existence of other hidden symmetries, and we investigate two mechanisms that help explain this phenomenon. Moreover, we motivate and define the notions of \(\varepsilon\)-effective activation regions and \(\varepsilon\)-effective functional dimension, and we experimentally estimate the difference between \(\varepsilon\)-effective functional dimension and true functional dimension for various parameter settings and different values of \(\varepsilon\). / Thesis (PhD) — Boston College, 2024. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Mathematics.
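For orientation, the standard background for the \((1,n,1)\) case (well-known facts, not results of the thesis) can be written out explicitly:

```latex
% A ReLU network of architecture (1,n,1) computes
\[
  f_\theta(x) \;=\; c + \sum_{i=1}^{n} w_i\,\sigma(a_i x + b_i),
  \qquad \sigma(t) = \max(0, t),
\]
% so \theta = (a_1,\dots,a_n, b_1,\dots,b_n, w_1,\dots,w_n, c) has 3n + 1
% coordinates. Two well-known global symmetries leave f_\theta unchanged:
% permuting the hidden units, and, for any \lambda > 0, the rescaling
\[
  (a_i, b_i, w_i) \;\longmapsto\; (\lambda a_i, \lambda b_i, w_i/\lambda),
\]
% which relies on positive homogeneity, \sigma(\lambda t) = \lambda\sigma(t).
% Quotienting by such symmetries is why the functional dimension can be
% strictly smaller than the parameter count 3n + 1.
```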
100. The role of interpretable neural architectures: from image classification to neural fields / Sambugaro, Zeno, 07 1900 (has links)
Neural networks have demonstrated outstanding capabilities, surpassing human expertise across diverse tasks. Despite these advances, their widespread adoption is hindered by the complexity of interpreting their decision-making processes. This lack of transparency raises concerns in critical areas such as autonomous mobility, digital security, and healthcare. This thesis addresses the critical need for more interpretable and efficient neural-based technologies, aiming to enhance their transparency and lower their memory footprint.
In the first part of this thesis we introduce Agglomerator and Agglomerator++, two frameworks that embody the principles of hierarchical representation to improve the understanding and interpretability of neural networks. These models aim to bridge the cognitive gap between human visual perception and computational models, effectively enhancing the capability of neural networks to dynamically represent complex data.
The second part of the manuscript addresses the lack of spatial coherence, and hence efficiency, in the latest fast-training neural field representations. To overcome this limitation, we propose Lagrangian Hashing, a novel method that combines the efficiency of Eulerian grid-based representations with the spatial flexibility of Lagrangian point-based systems. The method extends the foundational work on hierarchical hashing, allowing an adaptive allocation of the representation budget; in this way it preserves the coherence of the neural structure with respect to the reconstructed 3D space.
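For context, the Eulerian baseline that hierarchical hashing methods build on can be sketched in a toy 2D form; resolutions, table sizes, and the hash function below are illustrative assumptions, and the point-based Lagrangian extension of the thesis is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, table_size, feat_dim = 4, 2**14, 2
tables = rng.normal(0.0, 1e-2, (n_levels, table_size, feat_dim))
primes = np.array([1, 2654435761], dtype=np.uint64)  # per-axis hash primes

def hash_encode(xy: np.ndarray) -> np.ndarray:
    """Encode 2D points in [0,1)^2 as concatenated per-level features."""
    feats = []
    for lvl in range(n_levels):
        res = 16 * 2 ** lvl                     # grid resolution at this level
        cell = xy * res
        corner = np.floor(cell).astype(np.uint64)
        frac = cell - corner                    # position inside the cell
        acc = np.zeros((len(xy), feat_dim))
        for dx in (0, 1):                       # bilinear blend of 4 corners
            for dy in (0, 1):
                c = corner + np.array([dx, dy], dtype=np.uint64)
                idx = np.bitwise_xor(c[:, 0] * primes[0],
                                     c[:, 1] * primes[1]) % table_size
                w = (frac[:, 0] if dx else 1 - frac[:, 0]) * \
                    (frac[:, 1] if dy else 1 - frac[:, 1])
                acc += w[:, None] * tables[lvl][idx.astype(int)]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)       # input to a small MLP

points = rng.random((8, 2))
print(hash_encode(points).shape)                # (8, n_levels * feat_dim)
```

Because every level hashes into a fixed-size table regardless of where detail actually lies, the budget is spent uniformly; moving part of it to Lagrangian points is what permits adaptive allocation.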
Within the context of 3D reconstruction, we also conduct a comparative evaluation of NeRF-based reconstruction methodologies against traditional photogrammetry to assess their usability in practical, real-world settings.