  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An automatic learning of grammar for syntactic pattern recognition

Ofori, Paul 01 May 1988 (has links)
The practical utility of a syntactic pattern recognizer depends on the automatic learning of pattern-class grammars from a sample of patterns. The basic idea is to devise a learning process based on the induction of repeated substrings. Several techniques based on formal derivatives, k-tails, lattice structures, structural information sequences, inductive inference and heuristic approaches are widely found in the literature. The purpose of this research is first to devise a minimal finite-state automaton which recognizes all sample patterns. The automaton is then manipulated so that the induction of repetition is captured by cycles or loops. The final phase consists of converting the reduced automaton into a context-free grammar. An automatic parser for this grammar can then recognize patterns belonging to the respective class.
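The final conversion step can be illustrated with a minimal sketch: each automaton state becomes a nonterminal, each transition becomes a right-linear production, and a cycle in the automaton (captured repetition) becomes a recursive rule. The example automaton and the `to_grammar` helper are illustrative assumptions, not the thesis's actual algorithm.

```python
# Hedged sketch: turn a small finite automaton (with a cycle capturing
# repetition) into right-linear context-free grammar productions.

def to_grammar(transitions, accepting):
    """Each state becomes a nonterminal; each transition (state, symbol) -> next
    becomes a production 'state -> symbol next'; accepting states get epsilon."""
    productions = {}
    for (state, symbol), nxt in transitions.items():
        productions.setdefault(state, []).append((symbol, nxt))
    grammar = {}
    for state, rhs in productions.items():
        rules = [f"{sym} {nxt}" for sym, nxt in rhs]
        if state in accepting:
            rules.append("ε")
        grammar[state] = rules
    for state in accepting:          # accepting states with no outgoing edges
        grammar.setdefault(state, ["ε"])
    return grammar

# Two-state automaton over {a, b}: 'a' loops on S0 (the induced repetition),
# 'b' moves to the accepting state S1 — so S0 derives a* b.
transitions = {("S0", "a"): "S0", ("S0", "b"): "S1"}
grammar = to_grammar(transitions, accepting={"S1"})
print(grammar)  # {'S0': ['a S0', 'b S1'], 'S1': ['ε']}
```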
2

[en] HIERARCHICAL NEURO-FUZZY MODELS BASED ON REINFORCEMENT LEARNING FOR INTELLIGENT AGENTS / [pt] NOVOS MODELOS NEURO-FUZZY HIERÁRQUICOS COM APRENDIZADO POR REFORÇO PARA AGENTES INTELIGENTES

KARLA TEREZA FIGUEIREDO LEITE 21 July 2003 (has links)
[en] This thesis investigates neuro-fuzzy hybrid models for the automatic learning of actions taken by agents. The objective of these models is to provide an agent with intelligence, making it capable of acquiring and retaining knowledge and of reasoning (inferring an action) by interacting with its environment. Learning in these models is performed by a non-supervised process called Reinforcement Learning (RL). These novel neuro-fuzzy models have the following characteristics: automatic learning of the model structure; self-adjustment of the parameters associated with the structure; the capability of learning the action to be taken when the agent is in a given environment state; the possibility of dealing with a larger number of inputs than traditional neuro-fuzzy systems; and the generation of hierarchical linguistic rules. The work comprised three main stages: a bibliographic survey and study of learning models; the definition and implementation of two new hierarchical neuro-fuzzy models based on Reinforcement Learning; and case studies. The bibliographic survey and the study of learning models considered models employed in agents (aiming to enhance autonomous action) and in large and/or continuous state spaces. The definition of the two new neuro-fuzzy models was motivated by the importance of extending the autonomous capacity of agents through intelligence, particularly the learning capacity. The models were conceived from a study of the limitations of current models, as well as the desirable characteristics of RL-based learning systems, particularly when applied to continuous and/or high-dimension environments. Such environments present a characteristic called the curse of dimensionality, which makes the direct application of traditional RL methods impracticable.
Therefore, the decision to use a recursive partitioning methodology (already explored with excellent results in Souza, 1999), which significantly reduces the limitations of existing neuro-fuzzy systems, was crucial to this work. The BSP (Binary Space Partitioning) and Quadtree/Politree partitionings were chosen, generating the RL-NFHB (Reinforcement Learning - Hierarchical Neuro-Fuzzy BSP) and RL-NFHP (Reinforcement Learning - Hierarchical Neuro-Fuzzy Politree) models. These two new models are derived from the hierarchical neuro-fuzzy models NFHB and NFHQ (Souza, 1999), which use supervised learning. By using these partitioning methods together with Reinforcement Learning, a new class of Neuro-Fuzzy Systems (SNF) was obtained which performs, in addition to structure learning, the autonomous learning of the actions to be taken by an agent. These characteristics represent an important differential when compared to existing learning systems for intelligent agents. In the case studies, the two models were tested on three benchmark applications and one application in robotics. The benchmark applications concern three control-system problems: the mountain car problem, the cart-centering problem, and the inverted pendulum. The application in robotics made use of the Khepera model. The RL-NFHB and RL-NFHP models were implemented in Java on microcomputers running Windows 2000. The experiments demonstrate that these new models are well suited to control-system and robotics problems, showing good generalization and generating their own hierarchical structure of rules with a linguistic interpretation. Moreover, automatic learning of the environment endows the agent with intelligence (knowledge base, reasoning and learning), characteristics that increase the agent's autonomous capacity.
The hierarchical neuro-fuzzy systems field was also enhanced by the introduction of reinforcement learning, allowing the learning of hierarchical rules and actions to take place within the same process.
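The hierarchical neuro-fuzzy structure itself is beyond a short sketch, but the Reinforcement Learning update these models build on can be illustrated with plain tabular Q-learning on a toy corridor task. The environment, parameters and code below are invented for illustration and are not the thesis's models.

```python
import random

def q_learning(step, n_states, n_actions, episodes=300,
               alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning: Q[s][a] += alpha * (r + gamma*max(Q[s']) - Q[s][a])."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:                      # explore
                a = rng.randrange(n_actions)
            else:                                           # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy corridor with 4 states: action 1 moves right, action 0 moves left;
# reaching state 3 yields reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

Q = q_learning(step, n_states=4, n_actions=2)
print([max(range(2), key=lambda a: Q[s][a]) for s in range(3)])  # [1, 1, 1]
```

The learned greedy policy moves right in every state, which is the optimal behaviour on this corridor.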
3

Machine Learning Strategies for Large-scale Taxonomies / Stratégies d'apprentissage pour la classification dans les grandes taxonomies

Babbar, Rohit 17 October 2014 (has links)
In the era of Big Data, we need efficient and scalable machine learning algorithms that can perform automatic classification of terabytes of data. In this thesis, we study the machine learning challenges for classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction, and the performance on unseen data. In the first part of the thesis, we study the power-law distribution underlying large-scale taxonomies.
This analysis then motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting the study of this distribution further, we then design a classification scheme which leads to better accuracy on large-scale power-law distributed categories. We also propose an efficient method for model selection when training multi-class versions of classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model selection problem in large-scale classification, the choice between flat and hierarchical classification, from a learning-theoretic perspective. The presented generalization error analysis provides an explanation for empirical findings in many recent studies in large-scale hierarchical classification. We further exploit the developed bounds to propose two methods for adapting the given taxonomy of categories to output taxonomies which yield better test accuracy when used in a top-down setup.
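The power-law property can be checked with a simple rank-size test: if the size of the r-th largest category scales roughly as r^(-beta), then log(size) is linear in log(rank). The data and the tolerance-based check below are synthetic illustrations, not the thesis's analysis.

```python
import math

def fits_power_law(sizes, tol=0.05):
    """Check that log(size) is approximately linear in log(rank) by comparing
    the slopes between consecutive points on the log-log rank-size plot."""
    sizes = sorted(sizes, reverse=True)
    pts = [(math.log(r + 1), math.log(s)) for r, s in enumerate(sizes)]
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
    return max(slopes) - min(slopes) <= tol

# Exact power law with exponent 1.5: size(rank) = 1e6 * rank^(-1.5)
power_law_sizes = [1e6 * r ** -1.5 for r in range(1, 101)]
print(fits_power_law(power_law_sizes))  # True

# Exponential decay is not a power law: the log-log slope keeps steepening.
exponential_sizes = [1e6 * 0.9 ** r for r in range(100)]
print(fits_power_law(exponential_sizes))  # False
```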
4

De l'indexation d'évènements dans des films : application à la détection de violence / On events indexing in movies : application to violence detection

Penet, Cédric 10 October 2013 (has links)
In this thesis, we focus on the detection of semantic concepts in "Hollywood" movies using audio and video concepts, with violence detection as the target application. We present experiments along two main axes: the detection of violent audio concepts such as gunshots and explosions, and the detection of violence, initially based only on audio and then based on both audio and video. In the context of audio concept detection, we first highlight a generalisation problem arising between movies. We show that this problem is probably due to a statistical divergence between the audio features extracted from the movies. In order to solve it, we propose to use the concept of audio words, so as to reduce this variability by grouping samples by similarity, combined with contextual Bayesian networks. The results are very encouraging, and a comparison with the state of the art obtained on the same data shows that our results are equivalent. The resulting system can either be made robust against the applied threshold by using early fusion of features, or provide a wide variety of operating points. We finally propose an adaptation of the factor analysis scheme developed in the context of speaker recognition, and show that its integration into our system improves the results.
In the context of violence detection, we present the MediaEval Affect Task 2012 evaluation campaign, which aims at bringing together teams working on the topic of violence detection. We then propose three systems for detecting violence. The first two are based only on audio, the first using a TF-IDF description and the second integrating the audio concept detection system into the violence detection task. The last system is a multimodal system based on Bayesian networks that allows us to explore structure-learning algorithms for graphs. The performance obtained by the different systems, and a comparison with the systems developed within MediaEval, show that we are at the level of the state of the art and reveal the complexity of such systems.
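A TF-IDF description of this kind weights each term by its in-document frequency discounted by how many documents contain it, so concepts that recur everywhere contribute little. The minimal sketch below uses raw counts and log(N/df); the movie "documents" and token names are invented, and the thesis's exact weighting scheme may differ.

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document,
    with tf = raw count and idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["gunshot", "scream", "music"],
        ["music", "dialogue"],
        ["gunshot", "gunshot", "explosion"]]
w = tf_idf(docs)
# "music" occurs in 2 of 3 documents, "explosion" in only 1,
# so the rarer concept receives the larger weight.
print(w[2]["explosion"] > w[0]["music"])  # True
```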
5

Repérage automatique de séquences figées / Automatic extraction of fixed sequences

Joseph, Aurélie 18 December 2013 (has links)
The aim of this thesis is to propose a theoretical model and a methodology for fine-grained linguistic analysis of texts, able to represent the useful elements of mails, namely the message purpose and the message sender and addressee. This approach must allow efficient processing of NLP technology issues, especially the problem of fixedness, and more particularly fixed verbal sequences. This phenomenon is extremely frequent in all languages and is presented as a major source of difficulty for information retrieval in so-called unstructured documents. The thesis includes an applicative part demonstrating the relevance of the proposed theory and leading to a system for the automatic processing of mails. Moreover, the methodology for building the linguistic resources must make it possible to define a tool for the automatic learning of these resources, which can then be applied to new kinds of documents.
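One simple baseline for spotting candidate fixed sequences is to count word n-grams across a corpus and keep those that recur verbatim; the sketch below illustrates only that idea, with invented example sentences, and is not the thesis's method.

```python
from collections import Counter

def candidate_fixed_sequences(sentences, n=3, min_count=2):
    """Count word n-grams across sentences; n-grams recurring verbatim at
    least min_count times are candidate fixed sequences."""
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {ng: c for ng, c in counts.items() if c >= min_count}

sentences = [
    "please take into account the delay",
    "we take into account your request",
    "the delay is unfortunate",
]
print(candidate_fixed_sequences(sentences))
# {('take', 'into', 'account'): 2}
```

Real systems would add linguistic filters (POS patterns, association measures) on top of raw counts, but the recurrence test is the starting point.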
6

Apprentissage de grammaires catégorielles : transducteurs d’arbres et clustering pour induction de grammaires catégorielles / Learning categorial grammars

Sandillon Rezer, Noémie Fleur 09 December 2013 (has links)
Nowadays, we have become familiar with software interacting with us using natural language (for example in question-answering systems for after-sale services, human-computer interaction or simple discussion bots). These tools have to either react by keyword extraction or, more ambitiously, try to understand the sentence in its context. Though the simplest of these programs only have a set of pre-programmed sentences to react to recognized keywords (these systems include Eliza but also more modern systems like Siri), more sophisticated systems make an effort to understand the structure and the meaning of sentences (these include systems like Watson), allowing them to generate consistent answers, both with respect to the meaning of the sentence (semantics) and with respect to its form (syntax). In this thesis, we focus on syntax and on how to model syntax using categorial grammars. Our goal is to generate syntactically accurate sentences (without the semantic aspect) and to verify that a given sentence belongs to a language, here the French language.
We note that AB grammars, with the exception of some phenomena like quantification or extraction, are also a good basis for semantic purposes. We cover both grammar extraction from treebanks and parsing using the extracted grammars. For this purpose, we present two extraction methods and test the resulting grammars using standard parsing algorithms. The first method focuses on creating a generalized tree transducer, which transforms syntactic trees into derivation trees corresponding to an AB grammar. Applied to the various French treebanks, the transducer's output gives us a wide-coverage lexicon and a grammar suitable for parsing. The transducer, even if it differs only slightly from the usual definition of a top-down transducer, offers several new, compact ways to express transduction rules. We currently transduce 92.5% of all sentences in the treebanks into derivation trees. For our second method, we use a unification algorithm, guiding it with a preliminary clustering step which gathers the words according to their context in the sentence. The comparison between the transduced trees and this method gives the promising result of 91.3% similarity. Finally, we have tested our grammars on sentence analysis with a probabilistic CYK algorithm and a formula assignment step done with a supertagger. The obtained coverage lies between 84.6% and 92.6%, depending on the input corpus. The probabilities, estimated for the types of words and for the rules, enable us to select only the "best" derivation tree. All our software is available for download under the GNU GPL licence.
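Probabilistic CYK fills a chart with the best probability of deriving each span from each nonterminal, combining adjacent spans with binary rules and keeping the maximum. The tiny grammar, its probabilities, and the `pcyk` helper below are invented for illustration; the thesis's parser works over extracted AB-grammar types, not this toy grammar.

```python
# Hedged sketch of probabilistic CYK over a tiny grammar in Chomsky normal form.

def pcyk(words, lexical, binary, start="S"):
    """chart[i][j] maps a nonterminal to the best probability of deriving
    words[i:j]; binary rules (A, B, C) with probability p combine adjacent
    spans, keeping the maximum-probability derivation."""
    n = len(words)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                 # lexical step
        for nt, p in lexical.get(w, {}).items():
            chart[i][i + 1][nt] = p
    for span in range(2, n + 1):                  # binary combination step
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, b, c), p in binary.items():
                    if b in chart[i][k] and c in chart[k][j]:
                        q = p * chart[i][k][b] * chart[k][j][c]
                        if q > chart[i][j].get(a, 0.0):
                            chart[i][j][a] = q
    return chart[0][n].get(start, 0.0)

lexical = {"time": {"N": 0.7}, "flies": {"V": 0.6, "N": 0.4}}
binary = {("S", "N", "V"): 1.0}                   # S -> N V
print(round(pcyk(["time", "flies"], lexical, binary), 3))  # 0.42
```

The chart keeps only the best score per nonterminal and span, which is how the probabilities "select only the best derivation tree" as described above.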
7

Recherche d'images par le contenu : application à la proposition de mots clés / Image search by content and keyword proposal

Zhou, Zhyiong 08 February 2018 (has links)
The search for information in large collections of multimedia data and the content-based indexing of these large image databases are very current problems. They belong to a type of data management called Digital Asset Management (DAM); DAM uses image segmentation and data classification techniques. Our main contributions in this thesis can be summarized in three points: an analysis of the possible uses of different local feature extraction methods exploiting the VLAD technique; a new method for extracting dominant-colour information from an image; and a comparison of Support Vector Machines (SVM) with different classifiers for proposing indexing keywords. These contributions have been tested and validated on synthetic data and on real data.
Our methods were then widely used in the DAM ePhoto system developed by the company EINDEN, which financed the CIFRE thesis in which this work was carried out. The results are encouraging and open new perspectives for research.
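A common baseline for dominant-colour extraction is colour quantization followed by a histogram vote; the thesis proposes its own method, so the sketch below is only a generic illustration with an invented `dominant_color` helper and synthetic pixels.

```python
from collections import Counter

def dominant_color(pixels, bits=2):
    """Quantize each RGB channel to `bits` bits and return the most common
    quantized colour, mapped back to the centre of its bin in 0..255."""
    shift = 8 - bits
    quantized = [(r >> shift, g >> shift, b >> shift) for r, g, b in pixels]
    (qr, qg, qb), _ = Counter(quantized).most_common(1)[0]
    scale = 1 << shift
    return tuple(q * scale + scale // 2 for q in (qr, qg, qb))

# Mostly-red synthetic "image": three red-ish pixels, one blue pixel.
pixels = [(250, 10, 10), (240, 20, 5), (255, 0, 0), (10, 10, 250)]
print(dominant_color(pixels))  # (224, 32, 32)
```

Quantization makes nearly identical shades vote for the same bin, so small sensor noise does not split the dominant colour across many histogram cells.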
8

[en] METHODS FOR ACCELERATION OF LEARNING PROCESS OF REINFORCEMENT LEARNING NEURO-FUZZY HIERARCHICAL POLITREE MODEL / [pt] MÉTODOS DE ACELERAÇÃO DE APRENDIZADO APLICADO AO MODELO NEURO-FUZZY HIERÁRQUICO POLITREE COM APRENDIZADO POR REFORÇO

FABIO JESSEN WERNECK DE ALMEIDA MARTINS 04 October 2010 (has links)
[en] In this work, methods were developed and evaluated in order to improve and accelerate the learning process of the Reinforcement Learning Neuro-Fuzzy Hierarchical Politree (RL-NFHP) model.
This model is employed to provide an agent with intelligence, making it autonomous through the capacity to reason (infer actions) and to learn, acquiring knowledge through interaction with the environment by a Reinforcement Learning process. The RL-NFHP model has the following features: automatic learning of the model structure; self-adjustment of the parameters associated with its structure; the ability to learn the action to be taken when the agent is in a particular state of the environment; the ability to handle a larger number of inputs than traditional neuro-fuzzy systems; and the generation of hierarchical, linguistically interpretable rules. With the aim of improving and accelerating the learning process of the model, six action selection policies were developed, one of them an innovation of this work (Q-DC-roulette); the early stopping method was implemented for automatically determining the end of training; a cumulative eligibility trace was developed; a structure pruning method was created for removing unnecessary cells; and the original computer code was rewritten. The modified RL-NFHP model was evaluated in three applications: the simulated Mountain Car benchmark, well known in the area of autonomous agents; a simulated application in robotics based on the Khepera robot; and an application on a real NXT robot. The experiments show that this modified model fits control-system and robotics problems well, with good generalization. Compared with the original RL-NFHP model, learning was accelerated and smaller trained models were obtained.
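The eligibility-trace idea mentioned above lets one temporal-difference error update every recently visited state at once, with credit decaying the further back a state lies. The sketch below shows the standard accumulating trace in a TD(lambda) value update on an invented 3-state chain; the thesis's cumulative trace is a variant of this mechanism, not this exact code.

```python
def td_lambda_episode(trajectory, V, alpha=0.1, gamma=0.9, lam=0.8):
    """trajectory: list of (state, reward, next_state). The eligibility trace
    e[s] accumulates on each visit and decays by gamma*lam each step, so a
    single TD error updates every recently visited state."""
    e = {s: 0.0 for s in V}
    for s, r, s2 in trajectory:
        delta = r + gamma * V.get(s2, 0.0) - V[s]   # TD error
        e[s] += 1.0                                 # accumulate on visit
        for st in V:
            V[st] += alpha * delta * e[st]          # credit all traced states
            e[st] *= gamma * lam                    # decay
    return V

# Chain 0 -> 1 -> 2 -> terminal, with reward 1 only on the final step:
# the trace propagates that reward back to states 0 and 1 in one episode.
V = td_lambda_episode([(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)],
                      {0: 0.0, 1: 0.0, 2: 0.0})
print(V[0] > 0 and V[1] > V[0])  # True
```

Without the trace (lam = 0), only state 2 would change after this episode; the decay factor gamma*lam controls how far back the single reward reaches.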
9

[en] AUTOMATIC INTERPRETATION OF EQUIPMENT OPERATION REPORTS / [pt] INTERPRETAÇÃO AUTOMÁTICA DE RELATÓRIOS DE OPERAÇÃO DE EQUIPAMENTOS

PEDRO HENRIQUE THOMPSON FURTADO 28 July 2017 (has links)
[pt] As unidades operacionais da área de Exploração e Produção (EeP) da PETROBRAS utilizam relatórios diários para o registro de situações e eventos em Unidades Estacionárias de Produção (UEPs), as conhecidas plataformas de produção de petróleo. Um destes relatórios, o SITOP (Situação Operacional das Unidades Marítimas), é um documento diário em texto livre que apresenta informações numéricas (índices de produção, algumas vazões, etc.) e, principalmente, informações textuais. A parte textual, apesar de não estruturada, encerra uma valiosíssima base de dados de histórico de eventos no ambiente de produção, tais como: quebras de válvulas, falhas em equipamentos de processo, início e término de manutenções, manobras executadas, responsabilidades etc. O valor destes dados é alto, mas o custo da busca de informações também o é, pois se demanda a atenção de técnicos da empresa na leitura de uma enorme quantidade de documentos. O objetivo do presente trabalho é o desenvolvimento de um modelo de processamento de linguagem natural para a identificação, nos textos dos SITOPs, de entidades nomeadas e extração de relações entre estas entidades, descritas formalmente em uma ontologia de domínio aplicada a eventos em unidades de processamento de petróleo e gás em ambiente offshore. Ter-se-á, portanto, um método de estruturação automática da informação presente nestes relatórios operacionais. Os resultados obtidos demonstram que a metodologia é útil para este caso, ainda que passível de melhorias em diferentes frentes. A extração de relações apresenta melhores resultados que a identificação de entidades, o que pode ser explicado pela diferença entre o número de classes das duas tarefas. Verifica-se também que o aumento na quantidade de dados é um dos fatores mais importantes para a melhoria do aprendizado e da eficiência da metodologia como um todo. 
/ [en] The operational units of the Exploration and Production (E and P) area at PETROBRAS use daily reports to register situations and events at their Stationary Production Units (SPUs), the well-known oil production platforms. One of these reports, called SITOP (the Portuguese acronym for Offshore Units Operational Situation), is a daily free-text document that presents numerical information and, mainly, textual information about the operational situation of offshore units. The textual section, although unstructured, stores a valuable database of historical events in the production environment, such as valve breakages, failures in processing equipment, beginning and end of maintenance activities, actions executed, responsibilities, etc. The value of these data is high, but so is the cost of searching for relevant information, which consumes many hours of attention from technicians and engineers reading a large number of documents. The goal of this dissertation is to develop a natural language processing model to recognize named entities in the SITOP texts and to extract relations among them, described formally in a domain ontology applied to events in offshore oil and gas processing units. The result is a method for automatically structuring the information in these operational reports. Our results show that the methodology is useful in SITOP's case, while also indicating possible enhancements. Relation extraction showed better results than named entity recognition, which can be explained by the difference in the number of classes between the two tasks. We also verified that increasing the amount of data was one of the most important factors for improving learning and the efficiency of the methodology as a whole.
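The named-entity recognition and relation-extraction pipeline described in this abstract can be illustrated with a minimal sketch. This is a hypothetical rule-based toy, not the trained model from the dissertation: the entity lexicons, labels, and the window-based relation heuristic are all invented for illustration.

```python
# Hypothetical entity lexicons for the offshore-operations domain;
# the dissertation's model is learned from annotated SITOP reports,
# not hand-coded like this.
EQUIPMENT = {"valvula", "bomba", "compressor"}
EVENTS = {"quebra", "falha", "manutencao"}

def tag_entities(tokens):
    """Label each token as EQUIPMENT, EVENT, or O (outside any entity)."""
    labels = []
    for tok in tokens:
        low = tok.lower()
        if low in EQUIPMENT:
            labels.append("EQUIPMENT")
        elif low in EVENTS:
            labels.append("EVENT")
        else:
            labels.append("O")
    return labels

def extract_relations(tokens, labels, window=3):
    """Relate an EVENT to any EQUIPMENT found within `window` tokens.

    A real relation extractor would classify candidate pairs with a
    trained model against the ontology's relation types; proximity
    stands in for that here.
    """
    relations = []
    for i, label in enumerate(labels):
        if label != "EVENT":
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if labels[j] == "EQUIPMENT":
                relations.append((tokens[i], "AFFECTS", tokens[j]))
    return relations

# Toy fragment in the style of a SITOP report: "quebra da valvula de injecao"
tokens = "quebra da valvula de injecao".split()
labels = tag_entities(tokens)
rels = extract_relations(tokens, labels)
```

Running this on the toy fragment tags "quebra" as an EVENT and "valvula" as EQUIPMENT, and links them with the hypothetical AFFECTS relation, mirroring the structured output the dissertation aims to produce from free text.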
10

Uma investigação sobre o processo migratório para a plataforma de computação em nuvem no Brasil

SILVA, Hilson Barbosa da 22 January 2016 (has links)
Contexto: A Computação em Nuvem apresenta um novo conceito de terceirização na contratação de serviço. Esses avanços vêm sendo vistos como uma nova possibilidade para a redução nos volumes dos investimentos em TIC, proporcionada pela maior flexibilidade nos serviços ofertados sob demanda, tendo na redução de custo seu apelo mais forte. Mesmo sabendo dos benefícios do investimento em nuvem, presume-se que algumas empresas são receosas na contratação de serviços e/ou infraestruturas de TIC da computação em nuvem. Essa realidade é apresentada na pesquisa da Tech Supply, especializada em Inteligência Tecnológica para Auditoria, Integridade Corporativa e TI, segundo a qual 43% das empresas brasileiras não se sentem seguras para migrar os seus sistemas para a nuvem. Objetivo: Nesse contexto geral, apresentam-se dois objetivos: investigar os indícios pelos quais algumas empresas podem estar propensas a contratarem ou não os serviços de Computação em Nuvem no Brasil e, adicionalmente, identificar, para as empresas que já usam a nuvem, os motivos de sua satisfação ou insatisfação em relação aos serviços contratados no Brasil. Método: Para este estudo, definiu-se o tipo de pesquisa realizada como exploratória de natureza descritiva e explicativa, com ênfase na abordagem quantitativa. 
Quanto ao procedimento técnico, aplicou-se um levantamento através de um Survey, utilizando-se como instrumento um questionário com 14 (quatorze) itens. A coleta dessas informações foi disponibilizada através de um formulário WEB (online). Por fim, quanto ao tipo de análise aplicada aos resultados, utilizou-se o aprendizado automático para extração dos resultados. Com o uso de aprendizado automático, faz-se necessário o estabelecimento de algumas definições em relação aos métodos de aprendizagem a serem aplicados: tarefa de classificação por árvore de decisão com o algoritmo de classificação J48, método de aprendizagem por indução. Para o modo de treinamento, aplicou-se o não incremental. Na hierarquia do aprendizado, utilizou-se o aprendizado supervisionado e, para o paradigma de aprendizado, usou-se o simbólico. Definiram-se também as variáveis classificadoras para cada linha de investigação: “SIM”, propensas a contratar, ou “NÃO”, para as empresas que não usam; e “SATISFEITO” ou “INSATISFEITO” com a nuvem, para as empresas que já usam. Resultado: Descobriu-se que as características das empresas que estão propensas a contratar a nuvem são garantia de entrega e qualidade dos serviços. Em contrapartida, as empresas que não estão propensas a contratar os serviços da nuvem têm como características o baixo faturamento e poucos colaboradores, associados à confiabilidade e segurança da informação. Para a outra linha de investigação, em relação à satisfação, os motivos são o preço da nuvem associado aos modelos de Infraestrutura e Software como Serviço. Por outro lado, para as empresas que estão insatisfeitas, os motivos são segurança da informação e disponibilidade dos serviços, associadas à redução de custo. 
/ Context: Cloud computing presents a new concept of outsourcing in service contracting. These advances have been seen as a new possibility for reducing the volume of investments in ICT, enabled by the greater flexibility of services offered on demand, with cost reduction as their strongest appeal. Even knowing the benefits of investing in the cloud, it is assumed that some companies are afraid of contracting cloud computing services and/or ICT infrastructure. This reality is presented in a survey by Tech Supply, a company specializing in Technology Intelligence for Audit, Corporate Integrity and IT, according to which 43% of Brazilian companies do not feel safe migrating their systems to the cloud. Objective: In this general context, there are two objectives: to investigate the evidence by which some companies may be prone, or not, to hire Cloud Computing services in Brazil; and, additionally, to identify, for companies that already use the cloud, the reasons for their satisfaction or dissatisfaction with the cloud services contracted in Brazil. Method: For this study, the research was defined as exploratory, of descriptive and explanatory nature, with an emphasis on a quantitative approach. As for the technical procedure, a survey was applied, using as its instrument a questionnaire with 14 (fourteen) items. The data were collected through an online web form. Finally, regarding the type of analysis applied to the results, machine learning was used to extract the results. The use of machine learning requires establishing some definitions regarding the learning methods to be applied: a classification task using a decision tree with the J48 classification algorithm, a learning method based on induction. For the training mode, non-incremental training was applied. In the learning hierarchy, supervised learning was used, and the symbolic learning paradigm was adopted. 
The classification variables were defined for each research line: "YES", likely to hire, or "NO", for companies that do not use the cloud; and "SATISFIED" or "DISSATISFIED" with the cloud, for companies that already use it. Result: It was found that the characteristics of companies likely to hire cloud services are delivery assurance and service quality. Conversely, companies that are not likely to hire cloud services are characterized by low revenue and few employees, associated with concerns about reliability and information security. For the other line of research, regarding satisfaction, the reasons are the price of the cloud associated with the Infrastructure as a Service and Software as a Service models. On the other hand, for companies that are dissatisfied, the reasons are information security and service availability, associated with cost reduction.
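The J48 algorithm used in this study is Weka's implementation of C4.5, which grows a decision tree by induction: at each node it splits on the attribute that most reduces class entropy. The sketch below illustrates only that information-gain criterion, in plain Python with invented toy survey rows; it is not the dissertation's Weka pipeline, and C4.5 itself uses the gain ratio rather than raw gain.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting the rows on one attribute index."""
    total = entropy(labels)
    n = len(rows)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(subset) / n * entropy(subset)
                    for subset in by_value.values())
    return total - remainder

# Hypothetical survey rows: (delivery_assurance, low_revenue) -> hire the cloud?
# These values are invented to mirror the study's reported findings, not its data.
rows = [("yes", "no"), ("yes", "yes"), ("no", "yes"), ("no", "yes")]
labels = ["SIM", "SIM", "NAO", "NAO"]

# Decision-tree induction picks the attribute with the highest gain as the root.
best = max(range(2), key=lambda a: information_gain(rows, labels, a))
```

On this toy data, splitting on delivery assurance (attribute 0) separates the classes perfectly (gain of 1 bit), so induction would place it at the root, consistent with the study's finding that delivery assurance characterizes companies likely to hire the cloud.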
