311. An artificial neural network for robust shape recognition in real time / Westmacott, Jason. January 2000
Traditional Automatic Target Recognition (ATR) systems often fail when faced with recognition tasks involving noise, clutter, and scene complexity. This work is concerned with implementing a real-time, vision-based ATR system using an Artificial Neural Network (ANN) to overcome some of the shortcomings of traditional ATR systems. The key issues of this work are vision, pattern recognition, and artificial neural networks. The ANN presented in this thesis is inspired by Prof. Stephen Grossberg's work on Adaptive Resonance Theory (ART) and by neurophysiological data on the primate brain. An ANN known as Selective Attention Adaptive Resonance Theory (SAART) (Lozo, 1995, 1997) forms the basis of this work. SAART, which is based on Grossberg's ART, models the higher levels of visual processing in the primate brain to provide an ATR system capable of learning and recognising targets against cluttered and complex backgrounds. This thesis contributes an extension to the SAART model that allows a degree of tolerance to imperfections, including distortion and changes in size, orientation, or position. In addition to this extension, it is demonstrated how modulated neural layers can be used for image filtering. A possible extension of the architecture to multi-sensory environments is proposed as a foundation for future research. / Thesis (MEng)--University of South Australia, 2000
312. An efficient algorithm for extracting Boolean functions from linear threshold gates, and a synthetic decompositional approach to extracting Boolean functions from feedforward neural networks with arbitrary transfer functions / Peh, Lawrence T. W. January 2000
Artificial neural networks are universal function approximators that represent functions subsymbolically by weights, thresholds, and network topology. Naturally, the representation remains the same regardless of the problem domain. Suppose a network is applied to a symbolic domain. It is difficult for a human to construct the symbolic function dynamically from the neural representation. It is also difficult to retrain networks on perturbed training vectors, to resume training with different training sets, to form a new neuron by combining trained neurons, and to reason with trained neurons. Even the original training set does not provide a symbolic representation of the function implemented by the trained network, because the set may be incomplete or inconsistent, and the training phase may terminate with residual errors. The symbolic information in the network would be more useful if it were available in the language of the problem domain. Algorithms that translate the subsymbolic neural representation into a symbolic representation are called extraction algorithms. I argue that extraction algorithms that operate on single-output, layered feedforward networks are sufficient to analyse the class of multiple-output networks with arbitrary connections, including recurrent networks.

The translucency dimension of the ADT taxonomy for feedforward networks classifies extraction approaches as pedagogical, eclectic, or decompositional. Pedagogical and eclectic approaches typically use a symbolic learning algorithm that takes the network's input-output behaviour as its raw data. Both approaches construct a set of input patterns and observe the network's output for each pattern. Eclectic and pedagogical approaches construct the input patterns respectively with and without reference to the network's internal information. These approaches are suitable for approximating the network's function under a probably-approximately-correct (PAC) or similar framework, but they are unsuitable for constructing the network's complete function. Decompositional approaches use internal information from a network more directly to produce the network's function in symbolic form. Decompositional algorithms have two components. The first component is a core extraction algorithm that operates on a single neuron that is assumed to implement a symbolic function. The second component provides the superstructure for the first: it consists of a decomposition rule for producing such neurons and a recomposition rule for symbolically aggregating the extracted functions into the symbolic function of the network. This thesis makes contributions to both components for Boolean extraction.

I introduce a relatively efficient core algorithm called WSX, based on a novel Boolean form called BvF. The algorithm has a worst-case complexity of O(2^n/√n) for a neuron with n inputs, but in all cases its complexity can also be expressed as O(l) with an O(n) precalculation phase, where l is the length of the extracted expression in terms of the number of symbols it contains. I extend WSX for approximate extraction (AWSX) by introducing an interval about the neuron's threshold. Assuming that input patterns far from the threshold are more symbolically significant to the neuron than those near it, AWSX ignores the neuron's mappings for the symbolically insignificant input patterns, remapping them as convenient for efficiency. In experiments, this dramatically decreased extraction time while retaining most of the neuron's mappings for the training set.

Synthetic decomposition is this thesis' contribution to the second component of decompositional extraction. Classical decomposition decomposes the network into its constituent neurons. By extracting symbolic functions from these neurons, classical decomposition assumes that the neurons implement symbolic functions, or that approximating the subsymbolic computation in the neurons with symbolic computation does not significantly affect the network's symbolic function. I show experimentally that this assumption does not always hold. Instead of decomposing a network into its constituent neurons, synthetic decomposition uses constraints in the network that have the same functional form as neurons that implement Boolean functions; these neurons are called synthetic neurons. I present a starting point for constructing synthetic decompositional algorithms, and proceed to construct two such algorithms, each with a different strategy for decomposition and recomposition. One of the algorithms, ACX, works for networks with arbitrary monotonic transfer functions, so long as an inverse exists for the functions. It also has an elegant geometric interpretation that leads to meaningful approximations. I also show that ACX can be extended to layered networks with any number of layers.
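The core-extraction problem the abstract describes can be illustrated with a naive baseline: for a single linear threshold gate, enumerate all input patterns and emit one conjunction per firing pattern. This is the exhaustive O(2^n) approach that efficient core algorithms such as WSX improve upon; the sketch below is a generic illustration of the problem, not a reproduction of WSX or the BvF form (whose details are not given in the abstract), and the function and variable names are invented for this example.

```python
from itertools import product

def extract_dnf(weights, threshold):
    """Exhaustively extract a DNF Boolean expression from a linear
    threshold gate that fires when sum(w_i * x_i) >= threshold.
    Naive O(2^n) baseline, for illustration only."""
    n = len(weights)
    terms = []
    for bits in product([0, 1], repeat=n):
        if sum(w * b for w, b in zip(weights, bits)) >= threshold:
            # One conjunction (minterm) per firing input pattern.
            terms.append(" & ".join(
                f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)))
    return " | ".join(f"({t})" for t in terms) if terms else "False"

# A gate with unit weights and threshold 2 computes x0 AND x1.
print(extract_dnf([1, 1], 2))  # → (x0 & x1)
```

The exponential blow-up is visible even here: each additional input doubles the number of patterns enumerated, which is why a core algorithm with O(l) output-sensitive complexity matters.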
313. A decision making model for evaluating suppliers by multi-layer feed forward neural networks / Golmohammadi, Davood. January 2007
Thesis (Ph. D.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains xiii, 200 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 143-151).
314. A methodology for the modeling of forced dynamical systems for time series measurements using time-delay neural networks / Zolock, John D.
Thesis (Ph.D.)--Tufts University, 2005. / Adviser: Robert Greif. Submitted to the Dept. of Mechanical Engineering. Includes bibliographical references (leaves 231-237). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
315. Neural networks for data fusion / Wang, Fengzhen. January 1997
Thesis (Ph.D.)--McMaster University, 1997. / Includes bibliographical references. Also available via the World Wide Web.
316. An accurate CMOS four-quadrant analog multiplier / Gottiparthy, Ramraj; Wilamowski, Bogdan M. January 2006 (PDF)
Thesis (M.S.)--Auburn University, 2006. / Abstract. Vita. Includes bibliographical references.
317. Computational prediction of antisense oligonucleotides and siRNAs / Chalk, Alistair. January 2005
Dissertation (summary). Stockholm: Karolinska institutet, 2005. / With 6 accompanying papers.
318. Improving machine learning through oracle learning / Menke, Joshua E. January 2007 (PDF)
Thesis (Ph. D.)--Brigham Young University. Dept. of Computer Science, 2007. / Includes bibliographical references (p. 203-209).
319. Rainfall estimation from satellite infrared imagery using artificial neural networks / Hsu, Kuo-lin. January 1996 (PDF)
Thesis (Ph.D. - Hydrology and Water Resources)--University of Arizona. / Includes bibliographical references (leaves 228-234).
320. Multiclassifier neural networks for handwritten character recognition / Chai, Sin-Kuo. January 1995
Thesis (Ph.D.)--Ohio University, March 1995. / Title from PDF title page.