201

Neural networks approach to process control : the case of processes with long dead times

McLeod, Charles Meredith January 1999 (has links)
Thesis submitted in compliance with the requirements for the Doctor's Degree in Technology: Electrical Engineering, Technikon Natal, 1999. / This study relates to applications of static artificial neural networks (ANNs) to two basic problems of process control: (a) process model identification, and (b) optimal controller tuning. The emphasis is on model identification, where several novel techniques are introduced. A review of the use of ANNs for determining optimal controller settings is included as a logical adjunct which would make the complete system suitable for realisation as a portable or networked system. Three methods for obtaining good approximations for the parameters of first-order processes with long dead time using artificial neural networks (ANNs) are proposed and described. These are termed in this study the time-domain, frequency-domain and model-based methods. In each case the aim was to develop a brief one-shot test that could be applied with minimal disturbance to a closed-loop control system. These methods build on existing techniques, but introduce the following novel aspects: 2. The frequency-domain method makes use of the first 81 components of the FFT without further selection as input to a static ANN to yield process parameter estimates. 3. The model-based method uses a simple single-neuron implementation of an ARX model and uses a static ANN to relate process parameter values to the weights of this neuron. In making the analysis, the process input and output are applied repetitively to the neuron model with delays getting progressively larger. Useful effects arising from this are explored. A technique in which ANN training sets are slightly distorted in a random way during training of a radial basis function network is developed as part of the time- and frequency-domain methods. The benefits arising from this technique are demonstrated. These experimental ANN-based control methods are evaluated by means of simulations in which accuracy in the presence of measurement noise and performance with higher-order processes are measured and analysed. Although the main theme of this study is first-order-plus-dead-time (FOPDT) processes, the full autotuning scheme is tested with some representative higher-order processes. Finally, the composition of a complete autotuning scheme is proposed which includes the automatic generation of controller parameters by means of ANNs. / M
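To make the frequency-domain idea in the abstract above concrete, here is a minimal sketch, not taken from the thesis: it simulates FOPDT step responses, feeds their first 81 FFT magnitude components to a small static network (scikit-learn's MLPRegressor here), and trains the network to output the gain, time constant and dead time. The parameter ranges, network size and the use of step-response excitation are illustrative assumptions.

```python
# Hedged sketch of the frequency-domain method: FFT components -> static ANN -> FOPDT parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fopdt_step_response(K, tau, L, t):
    """Step response of K*exp(-L*s)/(tau*s + 1)."""
    return np.where(t >= L, K * (1.0 - np.exp(-(t - L) / tau)), 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 256)

X, Y = [], []
for _ in range(2000):
    K, tau, L = rng.uniform(0.5, 2.0), rng.uniform(1.0, 10.0), rng.uniform(1.0, 20.0)
    y = fopdt_step_response(K, tau, L, t)
    spectrum = np.abs(np.fft.rfft(y))[:81]      # first 81 FFT components, as in the abstract
    X.append(spectrum)
    Y.append([K, tau, L])

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
net.fit(np.array(X), np.array(Y))               # static ANN: spectrum -> (K, tau, L)

# Estimate parameters of an unseen process from its response spectrum
y_test = fopdt_step_response(1.2, 4.0, 12.0, t)
print(net.predict(np.abs(np.fft.rfft(y_test))[:81].reshape(1, -1)))
```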
202

Assessment of UV index using artificial neural networks

Human, Sep January 2002 (has links)
Dissertation submitted in compliance with the requirements for Master's Degree in Technology: Electrical Engineering (Light Current), Technikon Natal, 2002. / M
203

Computational Complexity of Hopfield Networks

Tseng, Hung-Li 08 1900 (has links)
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), exponential convergence behavior of discrete Hopfield networks, and simulation of Turing machines by discrete Hopfield networks.
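As background for the convergence results summarized above, here is a minimal sketch, assumed rather than drawn from the dissertation, of a discrete Hopfield network with asynchronous updates; every accepted flip lowers the network energy, which is why such updates always converge, while the dissertation analyses how long that convergence can take.

```python
# Hedged sketch: discrete Hopfield network, Hebbian storage, asynchronous updates to convergence.
import numpy as np

def hopfield_energy(W, s):
    return -0.5 * s @ W @ s

def run_to_convergence(W, s):
    s = s.copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):                 # asynchronous (sequential) updates
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new                      # each accepted flip lowers the energy
                changed = True
    return s

# Store one pattern with the outer-product (Hebbian) rule and recall a noisy copy
rng = np.random.default_rng(1)
p = rng.choice([-1, 1], size=16)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)
noisy = p.copy(); noisy[:3] *= -1
print(np.array_equal(run_to_convergence(W, noisy), p))   # True: the stored pattern is recovered
```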
204

Adaptive pattern recognition in a real-world environment

Bairaktaris, Dimitrios January 1991 (has links)
This thesis introduces and explores the notion of a real-world environment with respect to adaptive pattern recognition and neural network systems. It examines the individual properties of a real-world environment and proposes Continuous Adaptation, Persistence of information and Context-sensitive recognition as the major design criteria that a neural network system in a real-world environment should satisfy. Based on these criteria, it assesses the performance of Hopfield networks and Associative Memory systems and identifies their operational limitations. This leads to the introduction of Randomized Internal Representations, a novel class of neural network systems which stores information in a fully distributed way yet is capable of encoding and utilizing context. The thesis then assesses the performance of Competitive Learning and Adaptive Resonance Theory systems and, having identified their operational weaknesses, describes the Dynamic Adaptation Scheme, which satisfies all three design criteria for a real-world environment.
205

Enhancement of Deep Neural Networks and Their Application to Text Mining

Unknown Date (has links)
Many current application domains of machine learning and artificial intelligence involve knowledge discovery from text, such as sentiment analysis, document ontology, and spam detection. Humans have years of experience and training with language, enabling them to understand complicated, nuanced text passages with relative ease. A text classifier attempts to emulate or replicate this knowledge so that computers can discriminate between concepts encountered in text; however, learning high-level concepts from text, such as those found in many applications of text classification, is a challenging task due to the many challenges associated with text mining and classification. Recently, classifiers trained using artificial neural networks have been shown to be effective for a variety of text mining tasks. Convolutional neural networks have been trained to classify text from character-level input, automatically learning high-level abstract representations and avoiding the need for human-engineered features. This dissertation proposes two new techniques for character-level learning, log(m) character embedding and convolutional window classification. Log(m) embedding is a new character-vector representation for text data that is more compact and memory efficient than previous embedding vectors. Convolutional window classification is a technique for classifying long documents, i.e. documents with lengths exceeding the input dimension of the neural network. Additionally, we investigate the performance of convolutional neural networks combined with long short-term memory networks, explore how document length impacts classification performance and compare performance of neural networks against non-neural network-based learners in text classification tasks. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
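One plausible reading of the compact "log(m) character embedding" described above, offered purely as an illustrative assumption and not as the dissertation's actual scheme, is to encode each of m alphabet characters with ceil(log2(m)) binary values instead of a one-hot vector of length m, which is what makes the representation small:

```python
# Hedged sketch of a binary character encoding in the spirit of a log(m)-sized embedding.
import math
import numpy as np

def log_m_embed(text, alphabet):
    m = len(alphabet)
    width = math.ceil(math.log2(m))              # bits per character instead of m-dim one-hot
    index = {ch: i for i, ch in enumerate(alphabet)}
    rows = []
    for ch in text:
        code = index.get(ch, 0)                  # unknown characters map to index 0
        rows.append([(code >> b) & 1 for b in range(width)])
    return np.array(rows, dtype=np.float32)      # shape: (len(text), width)

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?"
emb = log_m_embed("deep learning", alphabet)
print(emb.shape)   # (13, 6): 41 symbols need only 6 bits each rather than a 41-dim one-hot vector
```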
206

Parallel Distributed Deep Learning on Cluster Computers

Unknown Date (has links)
Deep Learning is an increasingly important subdomain of artificial intelligence. Deep Learning architectures, artificial neural networks characterized by having both a large breadth of neurons and a large depth of layers, benefit from training on Big Data. The size and complexity of the model combined with the size of the training data makes the training procedure very computationally and temporally expensive. Accelerating the training procedure of Deep Learning using cluster computers faces many challenges, ranging from distributed optimizers to the large communication overhead specific to a system with off-the-shelf networking components. In this thesis, we present a novel synchronous data parallel distributed Deep Learning implementation on HPCC Systems, a cluster computer system. We discuss research that has been conducted on the distribution and parallelization of Deep Learning, as well as the concerns relating to cluster environments. Additionally, we provide case studies that evaluate and validate our implementation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
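The synchronous data-parallel scheme named above can be illustrated with a toy single-process example that is not the thesis's HPCC Systems implementation: each simulated worker computes a gradient on its own data shard, the gradients are averaged as if all workers had reached a synchronization barrier, and every replica applies the identical update. The linear model, learning rate and shard count are illustrative assumptions.

```python
# Hedged sketch of synchronous data-parallel training with simulated workers.
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=1024)

n_workers = 4
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

w = np.zeros(5)
for step in range(200):
    # "synchronous" step: wait for every worker's gradient, then average and update all replicas
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]
    w -= 0.05 * np.mean(grads, axis=0)

print(np.round(w, 2))   # close to true_w
```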
207

A BCU scalable sensory acquisition system for EEG embedded applications

Unknown Date (has links)
Electroencephalogram (EEG) recording has been through many changes and modifications since it was first introduced in 1929, driven by new technologies and advances in signal processing. The EEG data acquisition stage is the first and most valuable component in any EEG recording system: it has the role of gathering and conditioning its input and outputting reliable data to be effectively analyzed and studied by digital signal processors using sophisticated and advanced algorithms which help in numerous medical and consumer applications. We have designed a low-noise, low-power EEG data acquisition system that can be set to act as a standalone mobile EEG data processing unit providing data preprocessing functions; it can also serve as a very reliable high-speed data acquisition interface to an EEG processing unit. / by Sherif S. Fathalla. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
208

Improved recurrent neural networks for convex optimization. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Constrained optimization problems arise widely in scientific research and engineering applications. In the past two decades, solving optimization problems using recurrent neural network methods has been extensively investigated due to the advantages of massively parallel operations and rapid convergence. In real applications, neural networks with simple architecture and good performance are desired. However, most existing neural networks have some limitations and disadvantages in their convergence conditions or architecture complexity. This thesis concentrates on the analysis and design of recurrent neural networks with simplified architectures for solving more general convex optimization problems. In this thesis, some improved recurrent neural networks are proposed for solving smooth and non-smooth convex optimization problems and applied to some selected applications. / In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed neural network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. The global convergence of the neural network can be guaranteed if the objective function of the programming problem is convex on the equality constraint set, and not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed neural network has a simpler architecture, in which the number of neurons is the same as the number of decision variables of the optimization problem. Third, combining the projection and penalty parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints. / In Part II, some improved recurrent neural networks are proposed for solving non-smooth convex optimization problems. We first propose a one-layer recurrent neural network for solving non-smooth convex programming problems with only equality constraints. This neural network simplifies the Lagrangian network and extends it to solve non-smooth convex optimization problems. Then, a two-layer recurrent neural network is proposed for non-smooth convex optimization subject to linear equality and bound constraints. / In Part III, some selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the kWTA operation. Then, the proposed neural networks are applied to some other problems, such as the linear assignment, support vector machine learning and curve fitting problems. / Liu, Qingshan. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 133-145). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
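As a concrete illustration of the kWTA reformulation mentioned above (an illustrative sketch, not the recurrent network models proposed in the thesis): the kWTA operation outputs 1 for the k largest inputs and 0 otherwise, and the same selection can be written as a small linear program, solved here with SciPy only to show the equivalence.

```python
# Hedged sketch: k-winners-take-all by direct top-k selection and as an equivalent linear program.
import numpy as np
from scipy.optimize import linprog

def kwta(u, k):
    """Reference kWTA: mark the k largest entries of u with 1, the rest with 0."""
    x = np.zeros_like(u, dtype=float)
    x[np.argsort(u)[-k:]] = 1.0
    return x

u = np.array([0.3, 2.1, -0.5, 1.7, 0.9])
print(kwta(u, 2))                                   # [0. 1. 0. 1. 0.]

# Equivalent LP:  maximize u.x  subject to  sum(x) = k,  0 <= x <= 1
res = linprog(c=-u, A_eq=np.ones((1, u.size)), b_eq=[2], bounds=[(0, 1)] * u.size)
print(np.round(res.x))                              # same winners as the direct top-k rule
```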
209

Associative neural networks: properties, learning, and applications.

January 1994 (has links)
by Chi-sing Leung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 236-244). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Associative Neural Networks --- p.1 / Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3 / Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6 / Chapter 1.4 --- Scope and Organization --- p.9 / Chapter 1.5 --- Summary of Publications --- p.13 / Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17 / Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18 / Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18 / Chapter 2.2 --- Recall Process of BAM --- p.20 / Chapter 2.3 --- Stability of BAM --- p.22 / Chapter 2.4 --- Memory Capacity of BAM --- p.24 / Chapter 2.5 --- Error Correction Capability of BAM --- p.28 / Chapter 2.6 --- Chapter Summary --- p.29 / Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Existence of Energy Barrier --- p.34 / Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44 / Chapter 3.4 --- Confidence Dynamics --- p.49 / Chapter 3.5 --- Numerical Results from the Dynamics --- p.63 / Chapter 3.6 --- Chapter Summary --- p.68 / Chapter 4 --- Stability and Statistical Dynamics of Second order BAM --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- Second order BAM and its Stability --- p.71 / Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75 / Chapter 4.4 --- Numerical Results --- p.82 / Chapter 4.5 --- Extension to higher order BAM --- p.90 / Chapter 4.6 --- Verification of the conditions of Newman's Lemma --- p.94 / Chapter 4.7 --- Chapter Summary --- p.95 / Chapter 5 --- Enhancement of BAM --- p.97 / Chapter 5.1 --- Background --- p.97 / Chapter 5.2 --- Review on Modifications of BAM --- p.101 / Chapter 5.2.1 --- Change of the encoding method --- p.101 / Chapter 5.2.2 --- Change of the topology --- p.105 / Chapter 5.3 --- Householder Encoding Algorithm --- p.107 / Chapter 5.3.1 --- Construction from Householder Transforms --- p.107 / Chapter 5.3.2 --- Construction from iterative method --- p.109 / Chapter 5.3.3 --- Remarks on HCA --- p.111 / Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112 / Chapter 5.4.1 --- Construction of EHCA --- p.112 / Chapter 5.4.2 --- Remarks on EHCA --- p.114 / Chapter 5.5 --- Bidirectional Learning --- p.115 / Chapter 5.5.1 --- Construction of BL --- p.115 / Chapter 5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116 / Chapter 5.5.3 --- Remarks on BL --- p.120 / Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121 / Chapter 5.6.1 --- Construction of AHKBL --- p.121 / Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124 / Chapter 5.6.3 --- Remarks on AHKBL --- p.125 / Chapter 5.7 --- Computer Simulations --- p.126 / Chapter 5.7.1 --- Memory Capacity --- p.126 / Chapter 5.7.2 --- Error Correction Capability --- p.130 / Chapter 5.7.3 --- Learning Speed --- p.157 / Chapter 5.8 --- Chapter Summary --- p.158 / Chapter 6 --- BAM under Forgetting Learning --- p.160 / Chapter 6.1 --- Introduction --- p.160 / Chapter 6.2 --- Properties of Forgetting Learning --- p.162 / Chapter 6.3 --- Computer Simulations --- p.168 / Chapter 6.4 --- Chapter Summary --- p.168 / Chapter II --- Kohonen Map: Applications in Data compression and Communications --- p.170 / Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171 / Chapter 7.1 --- Background on Vector quantization --- p.171 / Chapter 7.2 --- Introduction to LBG algorithm --- p.173 / Chapter 7.3 --- Introduction to Kohonen Map --- p.174 / Chapter 7.4 --- Chapter Summary --- p.179 / Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181 / Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188 / Chapter 8.1.3 --- Computer Simulations --- p.191 / Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195 / Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195 / Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198 / Chapter 8.2.3 --- Computer Simulations --- p.200 / Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213 / Chapter 8.3.1 --- Motivation and Background --- p.214 / Chapter 8.3.2 --- Trellis Coded Modulation --- p.216 / Chapter 8.3.3 --- "Combined Vector Quantization, Error Control, and Modulation" --- p.220 / Chapter 8.3.4 --- Computer Simulations --- p.223 / Chapter 8.4 --- Chapter Summary --- p.226 / Chapter 9 --- Conclusion --- p.232 / Bibliography --- p.236
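To accompany the encoding and recall chapters listed above, here is a minimal sketch of the standard BAM construction (an assumption following the usual correlation-encoding description, not code from the thesis): pattern pairs are superimposed with outer products, and recall alternates between the two layers until the pair stabilizes.

```python
# Hedged sketch: bidirectional associative memory with outer-product encoding and two-layer recall.
import numpy as np

x1 = np.array([ 1, 1, 1, 1, 1, 1,-1,-1,-1,-1,-1,-1])
x2 = np.array([ 1,-1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1])   # orthogonal to x1
y1 = np.array([ 1, 1, 1, 1, 1, 1, 1, 1])
y2 = np.array([ 1,-1, 1,-1, 1,-1, 1,-1])               # orthogonal to y1

W = np.outer(y1, x1) + np.outer(y2, x2)                # correlation (outer-product) encoding

def bam_recall(W, x, steps=5):
    for _ in range(steps):
        y = np.where(W @ x >= 0, 1, -1)                # forward pass to the y layer
        x = np.where(W.T @ y >= 0, 1, -1)              # backward pass to the x layer
    return x, y

x_noisy = x1.copy(); x_noisy[:2] *= -1                 # flip two bits of the stored x1
x_rec, y_rec = bam_recall(W, x_noisy)
print(np.array_equal(x_rec, x1), np.array_equal(y_rec, y1))   # True True: the pair is recovered
```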
210

Soft self-organizing map.

January 1995 (has links)
by John Pui-fai Sum. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 99-104). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.2 --- Idea of SSOM --- p.3 / Chapter 1.3 --- Other Approaches --- p.3 / Chapter 1.4 --- Contribution of the Thesis --- p.4 / Chapter 1.5 --- Outline of Thesis --- p.5 / Chapter 2 --- Self-Organizing Map --- p.7 / Chapter 2.1 --- Introduction --- p.7 / Chapter 2.2 --- Algorithm of SOM --- p.8 / Chapter 2.3 --- Illustrative Example --- p.10 / Chapter 2.4 --- Property of SOM --- p.14 / Chapter 2.4.1 --- Convergence property --- p.14 / Chapter 2.4.2 --- Topological Order --- p.15 / Chapter 2.4.3 --- Objective Function of SOM --- p.15 / Chapter 2.5 --- Conclusion --- p.17 / Chapter 3 --- Algorithms for Soft Self-Organizing Map --- p.18 / Chapter 3.1 --- Competitive Learning and Soft Competitive Learning --- p.19 / Chapter 3.2 --- How does SOM generate ordered map? --- p.21 / Chapter 3.3 --- Algorithms of Soft SOM --- p.23 / Chapter 3.4 --- Simulation Results --- p.25 / Chapter 3.4.1 --- One dimensional map under uniform distribution --- p.25 / Chapter 3.4.2 --- One dimensional map under Gaussian distribution --- p.27 / Chapter 3.4.3 --- Two dimensional map in a unit square --- p.28 / Chapter 3.5 --- Conclusion --- p.30 / Chapter 4 --- Application to Uncover Vowel Relationship --- p.31 / Chapter 4.1 --- Experiment Set Up --- p.32 / Chapter 4.1.1 --- Network structure --- p.32 / Chapter 4.1.2 --- Training procedure --- p.32 / Chapter 4.1.3 --- Relationship Construction Scheme --- p.34 / Chapter 4.2 --- Results --- p.34 / Chapter 4.2.1 --- Hidden-unit labeling for SSOM2 --- p.34 / Chapter 4.2.2 --- Hidden-unit labeling for SOM --- p.35 / Chapter 4.3 --- Conclusion --- p.37 / Chapter 5 --- Application to vowel data transmission --- p.42 / Chapter 5.1 --- Introduction --- p.42 / Chapter 5.2 --- Simulation --- p.45 / Chapter 5.2.1 --- Setup --- p.45 / Chapter 5.2.2 --- Noise model and demodulation scheme --- p.46 / Chapter 5.2.3 --- Performance index --- p.46 / Chapter 5.2.4 --- Control experiment: random coding scheme --- p.46 / Chapter 5.3 --- Results --- p.47 / Chapter 5.3.1 --- Null channel noise (σ = 0) --- p.47 / Chapter 5.3.2 --- Small channel noise (0 ≤ σ ≤ 1) --- p.49 / Chapter 5.3.3 --- Large channel noise (1 ≤ σ ≤ 7) --- p.49 / Chapter 5.3.4 --- Very large channel noise (σ > 7) --- p.49 / Chapter 5.4 --- Conclusion --- p.50 / Chapter 6 --- Convergence Analysis --- p.53 / Chapter 6.1 --- Kushner and Clark Lemma --- p.53 / Chapter 6.2 --- Condition for the Convergence of Jou's Algorithm --- p.54 / Chapter 6.3 --- Alternative Proof on the Convergence of Competitive Learning --- p.56 / Chapter 6.4 --- Convergence of Soft SOM --- p.58 / Chapter 6.5 --- Convergence of SOM --- p.60 / Chapter 7 --- Conclusion --- p.61 / Chapter 7.1 --- Limitations of SSOM --- p.62 / Chapter 7.2 --- Further Research --- p.63 / Chapter A --- Proof of Corollary 1 --- p.65 / Chapter A.1 --- Mean Average Update --- p.66 / Chapter A.2 --- Case 1: Uniform Distribution --- p.68 / Chapter A.3 --- Case 2: Logconcave Distribution --- p.70 / Chapter A.4 --- Case 3: Loglinear Distribution --- p.72 / Chapter B --- Different Senses of neighborhood --- p.79 / Chapter B.1 --- Static neighborhood: Kohonen's sense --- p.79 / Chapter B.2 --- Dynamic neighborhood --- p.80 / Chapter B.2.1 --- Mou-Yeung Definition --- p.80 / Chapter B.2.2 --- Martinetz et al. Definition --- p.81 / Chapter B.2.3 --- Tsao-Bezdek-Pal Definition --- p.81 / Chapter B.3 --- Example --- p.82 / Chapter B.4 --- Discussion --- p.84 / Chapter C --- Supplementary to Chapter 4 --- p.86 / Chapter D --- Quadrature Amplitude Modulation --- p.92 / Chapter D.1 --- Amplitude Modulation --- p.92 / Chapter D.2 --- QAM --- p.93 / Bibliography --- p.99
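For readers unfamiliar with the baseline this thesis softens, here is a minimal sketch of Kohonen's standard SOM update (the algorithm reviewed in Chapter 2 of the thesis). The map size, neighbourhood width and annealing schedule are illustrative assumptions; the soft SOM replaces the hard winner selection below with a soft membership over all units.

```python
# Hedged sketch: classic Kohonen SOM with a 1-D map, hard winner selection and Gaussian neighbourhood.
import numpy as np

def train_som(data, n_units=10, epochs=50, lr=0.5, sigma=2.0):
    rng = np.random.default_rng(0)
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    positions = np.arange(n_units)                      # 1-D map topology
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(w - x, axis=1))          # hard winner-take-all step
            h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2)) # neighbourhood function
            w += lr * h[:, None] * (x - w)              # move the winner's neighbourhood toward x
        lr *= 0.95; sigma *= 0.95                       # anneal learning rate and neighbourhood width
    return w

data = np.random.default_rng(1).uniform(0, 1, size=(200, 2))
print(np.round(train_som(data), 2))                     # codebook vectors spread over the unit square
```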
