About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want it to be added, details can be found on the NDLTD website.
1

VisuNet: Visualizing Networks of feature interactions in rule-based classifiers

Anyango, Stephen Omondi Otieno, January 2016
No description available.
2

Learning hierarchical decomposition rules for planning: an inductive logic programming approach

Reddy, Chandrasekhara K.
Thesis (Ph.D.), Oregon State University, 1999. Typescript (photocopy). Includes bibliographical references (leaves 123-129). Also available on the World Wide Web.
3

Applying machine learning techniques to rule generation in intelligent tutoring systems

Jarvis, Matthew P., January 2004
Thesis (M.S.), Worcester Polytechnic Institute. Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
4

Rede neural recorrente com perturbação simultânea aplicada no problema do caixeiro viajante / Recurrent neural network with simultaneous perturbation applied to traveling salesman problem

Benini, Fabriciu Alarcão Veiga, 15 December 2008
This work addresses the classic combinatorial optimization problem known as the traveling salesman problem. A recurrent neural network was used to search for the shortest tour; the feedback topology adopted here is known as the Wang recurrent network. Simultaneous perturbation with stochastic approximation was adopted as the rule for training the synaptic weights. A detailed literature review of all the topics covered is also provided, with particular attention to multivariable optimization with simultaneous perturbation. Finally, the results obtained here are compared with other techniques applied to the traveling salesman problem for validation purposes.
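The training rule named in this record can be illustrated with a minimal sketch of simultaneous perturbation stochastic approximation (SPSA): the gradient is estimated from just two evaluations of the objective per step, whatever the number of parameters. The objective and gain sequences below are illustrative choices, not taken from the thesis.

```python
import random

def spsa_minimize(f, theta, iters=1000, seed=0):
    """Minimize f(theta) with simultaneous perturbation stochastic
    approximation (SPSA): each step estimates the gradient from only
    two evaluations of f, regardless of the number of parameters."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        a_k = 0.5 / (k + 10)     # decaying step size
        c_k = 0.1 / k ** 0.25    # decaying perturbation size
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher draws
        f_plus = f([t + c_k * d for t, d in zip(theta, delta)])
        f_minus = f([t - c_k * d for t, d in zip(theta, delta)])
        g_hat = (f_plus - f_minus) / (2.0 * c_k)  # directional difference
        # per-coordinate gradient estimate: g_hat / delta_i
        theta = [t - a_k * g_hat / d for t, d in zip(theta, delta)]
    return theta

# usage: a quadratic with minimum at (3, -2)
theta = spsa_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

Because both evaluations share one random perturbation, the cost per step is independent of the dimension, which is what makes the method attractive for training networks whose analytic gradients are unavailable.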
6

Difference target propagation

Lee, Dong-Hyun
No description available.
7

Neurodynamische Module zur Bewegungssteuerung autonomer mobiler Roboter / Neurodynamic modules for the motion control of autonomous mobile robots

Hild, Manfred, 07 January 2008
This thesis investigates how recurrent neural networks can be used for the motion control of autonomous robots. It treats, in turn, oscillators for four-legged robots, homeostatic ring modules for segmented robots, and monostable neural modules for robots with many degrees of freedom and complex motion sequences. The mathematical theory of the neuromodules is given equal weight with their practical implementation on real robot systems: their functional embedding into the overall system is considered, as are concrete aspects of the underlying hardware such as computational accuracy, temporal resolution, and the influence of the materials used. Interesting electronic circuit principles are discussed in detail, so the thesis contains all the theoretical and practical information needed to equip individual robot systems with an appropriate motion controller. A further aim of the work is to open a new approach to the theory of recurrent neural networks from the direction of classical engineering science: targeted comparisons of the neuromodules with analog electronic circuits, physical models, and algorithms from digital signal processing can ease the understanding of neurodynamics.
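The neural oscillators this record refers to can be illustrated by a minimal sketch of a two-neuron recurrent network of the SO(2) type studied in this line of work: the weight matrix is a rotation matrix scaled by a factor slightly above 1, and with tanh neurons the activity settles into a stable oscillation. The parameter values below are illustrative, not taken from the thesis.

```python
import math

def so2_oscillator(alpha=1.1, phi=0.5, steps=200):
    """Iterate a two-neuron recurrent network whose weight matrix is a
    rotation by phi scaled by alpha; with tanh activations and alpha
    slightly above 1, the output oscillates with period near 2*pi/phi."""
    w11, w12 = alpha * math.cos(phi), alpha * math.sin(phi)
    w21, w22 = -alpha * math.sin(phi), alpha * math.cos(phi)
    x, y = 0.1, 0.0  # small initial activation to kick off the oscillation
    trace = []
    for _ in range(steps):
        # simultaneous update of both neurons
        x, y = math.tanh(w11 * x + w12 * y), math.tanh(w21 * x + w22 * y)
        trace.append(x)
    return trace

# usage: the first neuron's output swings back and forth around zero,
# bounded by the tanh saturation
trace = so2_oscillator()
```

The linearization at the origin has eigenvalues of modulus alpha, so the rest state is unstable and the saturating nonlinearity bounds the growth, yielding a limit cycle whose frequency is set by phi; such an output can be used directly as a rhythmic drive signal for a joint.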
8

A deep learning theory for neural networks grounded in physics

Scellier, Benjamin
In the last decade, deep learning has become a major component of artificial intelligence, leading to a series of breakthroughs across a wide variety of domains. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer from speed and energy inefficiencies due to the separation of memory and processing. To solve these problems, the field of neuromorphic computing aims to implement neural networks on hardware architectures that merge memory and processing, just as brains do. In this thesis, we argue that building large, fast, and efficient neural networks on neuromorphic architectures also requires rethinking the algorithms used to implement and train them. We present an alternative mathematical framework, also compatible with SGD, which makes it possible to design neural networks in substrates that directly exploit the laws of physics. Our framework applies to a very broad class of models, namely those whose state or dynamics are described by variational equations; this includes physical systems whose equilibrium state minimizes an energy function and physical systems whose trajectory minimizes an action functional (the principle of least action). We present a simple procedure to compute the loss gradients in such systems, called equilibrium propagation (EqProp), which requires only locally available information for each trainable parameter. Since many models in physics and engineering can be described by variational principles, our framework has the potential to be applied to a broad variety of physical systems, with applications extending to various fields of engineering beyond neuromorphic computing.
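The EqProp procedure this abstract describes can be sketched on a toy energy-based system; the quadratic energy, cost, and parameter values below are illustrative choices, not taken from the thesis. A free phase relaxes the state to a minimum of the energy E; a nudged phase relaxes it with the cost weakly added (E + beta*C); the loss gradient is read off from how dE/dtheta differs between the two equilibria.

```python
def relax(grad, s=0.0, lr=0.1, steps=500):
    """Settle the state s into an equilibrium by gradient descent."""
    for _ in range(steps):
        s -= lr * grad(s)
    return s

def eqprop_grad(x, y, theta, beta=1e-3):
    """Equilibrium propagation on a toy model with energy
    E(s) = 0.5*s**2 - theta*x*s and cost C(s) = 0.5*(s - y)**2."""
    dE = lambda s: s - theta * x                     # free dynamics: dE/ds
    dF = lambda s: (s - theta * x) + beta * (s - y)  # nudged: d(E + beta*C)/ds
    s_free = relax(dE)              # free equilibrium (here s = theta*x)
    s_nudged = relax(dF, s=s_free)  # weakly nudged toward the target y
    dE_dtheta = lambda s: -x * s    # local quantity: dE/dtheta at a state
    # contrast the two equilibria; exact loss gradient in the limit beta -> 0
    return (dE_dtheta(s_nudged) - dE_dtheta(s_free)) / beta

# usage: compare with the analytic gradient x*(theta*x - y) of the loss
# L(theta) = 0.5*(theta*x - y)**2 evaluated at the free equilibrium
g = eqprop_grad(x=2.0, y=1.0, theta=0.8, beta=1e-3)
```

The key property, per the abstract, is locality: dE/dtheta depends only on quantities available at the parameter itself, so the update needs no global backward pass, only the two relaxations of the physical system.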
