
EMO - A Computational Emotional State Module : Emotions and their influence on the behaviour of autonomous agents

Esbjörnsson, Jimmy January 2007 (has links)
Artificial intelligence (AI) is already a fundamental component of computer games, and in this context emotions are a growing part of simulating real life. The proposed emotional state module provides a way for game agents to select actions in real-time virtual environments; its function has been tested with the open-source strategy game ORTS. This thesis proposes a new approach to the design of an interacting network of emotional states, similar to a spreading activation system, that keeps track of emotion intensities changing and interacting over time. The network of emotions can represent any number of persisting states, such as moods, emotions and drives. Any emotional signal can affect every state positively or negatively, and each state's response to emotional signals is influenced by the other states represented in the network. The network is contained within an emotional state module. These interactions between emotions are not the focus of much research, nor is the representation model; the focus tends to be on the mechanisms eliciting emotions and on how to express them.
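The mechanism described above, a network of persisting states whose intensities are pushed by external signals and by each other through signed connections, can be sketched in a few lines. The state names, weights, decay and clamping below are illustrative assumptions, not values from the thesis:

```python
class EmotionNetwork:
    """Sketch of a spreading-activation network of emotional states.

    Each state holds an intensity in [0, 1]; incoming emotional signals
    raise or lower intensities, and states influence one another through
    signed connection weights.
    """

    def __init__(self, states, connections, decay=0.05):
        self.intensity = {s: 0.0 for s in states}
        self.connections = connections  # (src, dst) -> signed weight
        self.decay = decay

    def signal(self, state, strength):
        """Apply an external emotional signal to one state."""
        self.intensity[state] = self._clamp(self.intensity[state] + strength)

    def step(self):
        """One update: spread activation between states, then decay."""
        spread = {s: 0.0 for s in self.intensity}
        for (src, dst), w in self.connections.items():
            spread[dst] += w * self.intensity[src]
        for s in self.intensity:
            self.intensity[s] = self._clamp(
                self.intensity[s] + spread[s] - self.decay)

    @staticmethod
    def _clamp(x):
        return max(0.0, min(1.0, x))


# Fear suppresses joy and feeds anger; every intensity decays over time.
net = EmotionNetwork(
    states=["fear", "anger", "joy"],
    connections={("fear", "joy"): -0.3, ("fear", "anger"): 0.2},
)
net.signal("fear", 0.8)
net.signal("joy", 0.5)
net.step()
```

A game agent would read the resulting intensities each frame when selecting its next action.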

Hierarchical Temporal Memory Software Agent : In the light of general artificial intelligence criteria

Heyder, Jakob January 2018 (has links)
Artificial general intelligence is not well defined, but attempts such as the recent list of "Ingredients for building machines that think and learn like humans" are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with their research to re-engineer principles of the neocortex. It is to be explored how the ingredients are in line with the design principles of their algorithms. Inspired by DeepMind's commentary about an autonomy ingredient, this project created a combination of Numenta's Hierarchical Temporal Memory theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open-source software package, based on Numenta's intelligent computing platform NUPIC and OpenAI's framework Universe, was developed to allow further research on HTM-based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and that there is potential for generalization inherent to sparse representations. However, they also reveal the infancy of the algorithms, which are not yet capable of learning dynamic, complex problems, and that much future research is needed to explore whether they can create scalable solutions towards a more general intelligent system.
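The temporal-difference half of the HTM+TD combination can be illustrated with a plain tabular TD(0) value update; the HTM state encoding itself is omitted here and replaced by discrete states, so this is only a sketch of the learning rule, not of the thesis's agent:

```python
def td0_value_estimation(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0): V(s) += alpha * (r + gamma * V(s') - V(s)).

    `episodes` is a list of trajectories, each a list of
    (state, reward, next_state) transitions; next_state is None
    at the end of an episode.
    """
    V = {}
    for episode in episodes:
        for state, reward, next_state in episode:
            v_s = V.get(state, 0.0)
            v_next = 0.0 if next_state is None else V.get(next_state, 0.0)
            V[state] = v_s + alpha * (reward + gamma * v_next - v_s)
    return V


# Two-state chain: A -> B -> terminal, reward 1.0 on reaching terminal.
episode = [("A", 0.0, "B"), ("B", 1.0, None)]
V = td0_value_estimation([episode] * 200)
```

After repeated episodes the values approach V(B) = 1.0 and V(A) = gamma * V(B) = 0.9, the discounted distance-to-reward that a TD agent uses to choose actions.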

Extração de preferências por meio de avaliações de comportamentos observados. / Preference elicitation using evaluation over observed behaviours.

Silva, Valdinei Freire da 07 April 2009 (has links)
Recently, computer systems have been delegated a variety of tasks, mainly when the computer system is more reliable or when the task is not suitable for a human being. Preference elicitation helps with such delegation by enabling even lay people to easily program a computer system with their own preferences. A person's preferences are elicited through their answers to specific questions, which the computer system formulates by itself. The person acts as a user of the computer system, while the computer system can be seen as an agent that acts in place of the person. The structure and context of the questions have been pointed to as sources of variance in users' answers, and such variance can jeopardize the feasibility of preference elicitation. One way to avoid such variance is to ask the user to choose between two behaviours that he has observed. Evaluating observed behaviours relative to each other makes questions simpler and more transparent for the user, decreasing the variance, but it may not be easy for the agent to interpret such evaluations. If the agent's and the user's perceptions diverge, the agent may be unable to learn the user's preferences. Evaluations are generated from the user's perceptions, but all an agent can do is relate them to its own perceptions. Another issue is that questions, which are exposed to the user through demonstrated behaviours, are now constrained by the environment dynamics: a behaviour cannot be chosen arbitrarily, it must be feasible, and a policy must be executed in the environment for the behaviour to be demonstrated. Whereas the first issue affects the inference of how the user evaluates behaviours, the second affects how fast and how accurately the learning process can proceed.
This thesis proposes the problem of Preference Elicitation based on Observed Behaviours using the Markov Decision Process framework, and develops theoretical properties within that framework that make the problem computationally feasible. The problem of differing perceptions is analysed and constrained solutions are developed. The problem of demonstrating behaviours is analysed using question formulation based on stationary and non-stationary (replanned) policies; algorithms implementing both types of questions were implemented and tested to solve preference elicitation in a scenario under constrained conditions.
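The core idea, learning a user's preferences from pairwise evaluations of observed behaviours, can be sketched with a perceptron-style update over trajectory feature counts. The feature representation and update rule below are illustrative assumptions, not the algorithms developed in the thesis:

```python
def feature_counts(trajectory):
    """Sum the per-step feature vectors of an observed behaviour."""
    totals = [0.0] * len(trajectory[0])
    for features in trajectory:
        for i, f in enumerate(features):
            totals[i] += f
    return totals


def elicit_weights(comparisons, steps=100, lr=0.1):
    """Learn reward weights w from pairwise behaviour preferences.

    Each comparison is (preferred, other), where each behaviour is a
    list of feature vectors. We seek w such that the preferred
    behaviour scores higher: w . phi(preferred) > w . phi(other).
    """
    dim = len(comparisons[0][0][0])
    w = [0.0] * dim
    for _ in range(steps):
        for preferred, other in comparisons:
            phi_p = feature_counts(preferred)
            phi_o = feature_counts(other)
            margin = sum(wi * (p - o)
                         for wi, p, o in zip(w, phi_p, phi_o))
            if margin <= 0:  # misranked: nudge w toward the preferred one
                w = [wi + lr * (p - o)
                     for wi, p, o in zip(w, phi_p, phi_o)]
    return w


# Hypothetical 2-feature world: the user prefers behaviours that
# accumulate feature 0 and avoid feature 1.
fast = [(1.0, 0.0), (1.0, 0.0)]
slow = [(0.0, 1.0), (0.0, 1.0)]
w = elicit_weights([(fast, slow)])
```

The divergent-perceptions problem discussed above corresponds, in this sketch, to the agent computing `feature_counts` from its own observations while the user judged the behaviours from theirs.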

Task-oriented communicative capabilities of agents in collaborative virtual environments for training / Des agents avec des capacités communicatives orientées tâche dans les environnements de réalité virtuelle collaboratifs pour l'apprentissage

Barange, Mukesh 12 March 2015 (has links)
Growing education and training requirements motivate the use of collaborative virtual environments for training (CVET), which allow human users to work together with autonomous agents to perform a collective activity. The vision is inspired by the fact that effective coordination improves productivity and reduces individual and team errors. This work addresses the issue of establishing and maintaining coordination in a mixed human-agent team in the context of a CVET.
The objective of this research is to provide virtual agents with human-like conversational behaviour so that they can cooperate with a user and with other agents to achieve shared goals. We propose a belief-desire-intention (BDI) based Collaborative Conversational agent architecture (C2BDI) that treats deliberative and conversational behaviours uniformly, both guided by the goal-directed shared activity. We put forward an integrated model of coordination, founded on shared mental model approaches, to establish coordination in a human-agent team. We argue that natural language interaction between team members can affect and modify the individual and shared mental models of the participants. Finally, we describe how agents establish and maintain team coordination through natural language conversation. To establish a strong coupling between decision making and the collaborative conversational behaviour of the agent, we propose, first, Mascaret-based semantic modelling of human activities and of the virtual environment and, second, an information-state-based context model. This representation allows the semantic knowledge of the collaborative activity and the virtual environment, together with the information exchanged during dialogue, to be treated in a unified manner. The agents use this knowledge for multiparty natural language understanding and generation in the context of the CVET. To endow the C2BDI agent with communicative capabilities, we adopt the information-state approach to natural language processing of utterances, allowing agents to engage proactively in natural language interaction so as to coordinate effectively with other team members. We also define collaborative conversation protocols that support coordination between team members. Finally, we propose a decision-making mechanism, inspired by the BDI approach, that interleaves the deliberative and conversational behaviours of the agent. We applied the proposed architecture to three different scenarios in a CVET.
We found that the multiparty collaborative conversational behaviour of the C2BDI agent facilitates the user's effective coordination with other team members when performing a shared task.
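The interleaving of deliberation and conversation in a single goal-directed loop can be sketched as a minimal BDI cycle in which dialogue acts are just another kind of plan step. All names and structures here are illustrative assumptions, not the actual C2BDI implementation (which builds on Mascaret and an information-state dialogue model):

```python
class BDIAgentSketch:
    """Minimal BDI-style loop treating conversational acts uniformly
    with task actions, in the spirit of the architecture above."""

    def __init__(self, plans):
        self.beliefs = {}    # agent's knowledge of the shared activity
        self.desires = []    # pending shared goals
        self.plans = plans   # goal -> action list; "say:..." = dialogue act
        self.outbox = []     # dialogue acts produced so far

    def perceive(self, percepts):
        self.beliefs.update(percepts)

    def step(self):
        """One cycle: pick a desire, take the next step of its plan.
        Conversational behaviour goes through the same loop as
        deliberative behaviour."""
        if not self.desires:
            return None
        goal = self.desires[0]
        plan = self.plans.get(goal, [])
        if not plan:
            self.desires.pop(0)  # goal exhausted
            return None
        action = plan.pop(0)
        if action.startswith("say:"):  # dialogue act toward teammates
            self.outbox.append(action[4:])
        return action


agent = BDIAgentSketch(plans={
    "assemble_part": ["say:I will fetch the part", "fetch", "attach"],
})
agent.desires.append("assemble_part")
first = agent.step()   # announces intention to the team
second = agent.step()  # then acts on it
```

Announcing the intention before acting is the sketch's stand-in for the coordination-supporting conversation protocols described above.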

Homeostatic Plasticity in Input-Driven Dynamical Systems

Toutounji, Hazem 26 February 2015 (has links)
The degree to which a species can adapt to the demands of its changing environment defines how well it can exploit the resources of new ecological niches. Since the nervous system is the seat of an organism's behavior, studying adaptation starts from there. The nervous system adapts through neuronal plasticity, which may be considered the brain's reaction to environmental perturbations. In a natural setting, these perturbations are always changing. As such, a full understanding of how the brain functions requires studying neuronal plasticity under temporally varying stimulation conditions, i.e., studying the role of plasticity in carrying out spatiotemporal computations. Only then can we exploit the full potential of neural information processing to build powerful brain-inspired adaptive technologies. Here, we focus on homeostatic plasticity, where certain properties of the neural machinery are regulated so that they remain within a functionally and metabolically desirable range. Our main goal is to illustrate how homeostatic plasticity, interacting with associative mechanisms, is functionally relevant for spatiotemporal computations. The thesis consists of three studies that share two features: (1) homeostatic and synaptic plasticity act on a dynamical system such as a recurrent neural network; (2) the dynamical system is nonautonomous, that is, subject to temporally varying stimulation. In the first study, we develop a rigorous theory of spatiotemporal representations and computations and of the role of plasticity. Within this theory, we show that homeostatic plasticity increases the capacity of the network to encode spatiotemporal patterns, and that synaptic plasticity associates these patterns to network states. The second study applies the insights of the first to the single-node delay-coupled reservoir computing architecture (DCR), whose activity is sampled at several computational units. We derive a homeostatic plasticity rule acting on these units and show analytically that the rule balances the two processes necessary for spatiotemporal computations identified in the first study; as a result, the computational power of the DCR increases significantly. The third study considers minimal neural control of robots. We show that recurrent neural control with homeostatic synaptic dynamics endows robots with memory, and demonstrate that this memory is necessary for generating behaviors such as obstacle avoidance in a wheel-driven robot and stable hexapod locomotion.
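A toy version of the homeostatic idea, a unit regulating a property of its own excitability so that its activity stays within a desirable range, fits in a few lines. This simple threshold-adaptation rule is an illustrative stand-in for the rules derived in the thesis, not one of them:

```python
import math


def homeostatic_threshold(inputs, target=0.2, eta=0.05):
    """A sigmoidal unit adapts its bias (firing threshold) so that its
    output rate settles near a metabolically desirable target.

    Returns the adapted bias after driving the unit with `inputs`.
    """
    b = 0.0
    for x in inputs:
        y = 1.0 / (1.0 + math.exp(-(x + b)))
        b -= eta * (y - target)  # too active -> raise the threshold
    return b


# A persistently strong input perturbs the unit; homeostasis pulls its
# operating point back toward the target rate.
b = homeostatic_threshold([2.0] * 2000, target=0.2)
final_y = 1.0 / (1.0 + math.exp(-(2.0 + b)))
```

The negative-feedback structure, measure activity, compare to a set point, adjust an intrinsic property, is what makes the regulation homeostatic; the thesis studies such rules interacting with synaptic plasticity in driven recurrent networks.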

Um agente autônomo baseado em aprendizagem por reforço direcionado à meta / An autonomous agent based on goal-directed reinforcement learning

Braga, Arthur Plínio de Souza 16 December 1998 (has links)
One of the goals of research in artificial intelligence (AI) is the development of intelligent mechanisms able to fulfil pre-established objectives fully independently in dynamic, complex environments. A recent strand of AI research, autonomous agents, has been achieving increasingly promising results toward this goal. The motivation of this work is to propose and implement an agent that learns to execute tasks, without the interference of a tutor, in an unstructured environment. The practical task proposed to test the agent is the navigation of a mobile robot in environments with different configurations whose structures are initially unknown to the agent. The reinforcement learning paradigm, through variations of temporal difference methods, was used to implement the agent described in this research.
The final result is an autonomous agent that uses a simple algorithm to exhibit properties such as learning from tabula rasa, incremental learning, deliberative planning, reactive behaviour, performance improvement through interaction with the environment, and the ability to manage multiple objectives. The proposed agent also shows promising performance in environments whose structure changes over time, although in certain situations its behaviour in such environments tends to become inconsistent.
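Goal-directed temporal-difference learning of the kind the thesis builds on can be illustrated with tabular Q-learning on a small grid world standing in for the robot's environment. The grid size, reward scheme and hyperparameters are illustrative assumptions, not the thesis's setup:

```python
import random


def q_learning_grid(width, height, goal, episodes=600, alpha=0.5,
                    gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning from tabula rasa (all Q-values start at 0)
    on a grid with moves clamped at the borders."""
    rng = random.Random(seed)
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(400):  # cap episode length
            if s == goal:
                break
            if rng.random() < epsilon:  # explore
                a = rng.choice(list(moves))
            else:                       # exploit current estimates
                a = max(moves, key=lambda m: Q.get((s, m), 0.0))
            dx, dy = moves[a]
            s2 = (min(max(s[0] + dx, 0), width - 1),
                  min(max(s[1] + dy, 0), height - 1))
            reward = 1.0 if s2 == goal else 0.0
            future = 0.0 if s2 == goal else max(
                Q.get((s2, m), 0.0) for m in moves)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (reward + gamma * future - old)
            s = s2
    return Q


def greedy_path(Q, start, goal, width, height, limit=20):
    """Follow the learned greedy policy from start toward goal."""
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    path, s = [start], start
    while s != goal and len(path) <= limit:
        a = max(moves, key=lambda m: Q.get((s, m), 0.0))
        dx, dy = moves[a]
        s = (min(max(s[0] + dx, 0), width - 1),
             min(max(s[1] + dy, 0), height - 1))
        path.append(s)
    return path


Q = q_learning_grid(4, 4, goal=(3, 3))
path = greedy_path(Q, (0, 0), (3, 3), 4, 4)
```

The agent starts with no knowledge of the grid and, purely from goal rewards, acquires a policy that navigates to the goal, a miniature version of the tabula-rasa navigation behaviour described above.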
