About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Contribution to complex visual information processing and autonomous knowledge extraction : application to autonomous robotics / Contribution au traitement d’informations visuelles complexes et à l’extraction autonome des connaissances : application à la robotique autonome

Ramik, Dominik Maximilián 10 December 2012 (has links)
Le travail effectué lors de cette thèse concerne le développement d'un système cognitif artificiel autonome. La solution proposée repose sur l'hypothèse que la curiosité est une source de motivation d'un système cognitif dans le processus d'acquisition de nouvelles connaissances. En outre, deux types distincts de curiosité ont été identifiés conformément au système cognitif humain. Sur ce principe, une architecture cognitive à deux niveaux a été proposée. Le bas-niveau repose sur le principe de la saillance perceptive, tandis que le haut-niveau réalise l'acquisition des connaissances par l'observation et l'interaction avec l'environnement. Cette thèse apporte les contributions suivantes : A) Un état de l'art sur l'acquisition autonome de connaissance. B) L'étude, la conception et la réalisation d'un système cognitif bas-niveau basé sur le principe de la curiosité perceptive. L'approche proposée repose sur la saillance visuelle réalisée grâce au développement d'un algorithme rapide et robuste permettant la détection et l'apprentissage d'objets saillants. C) La conception d'un système cognitif haut-niveau, basé sur une approche générique, permettant l'acquisition de connaissance à partir de l'observation et de l'interaction avec son environnement (y compris avec les êtres humains). Basé sur la curiosité épistémique, le système cognitif haut-niveau développé permet à une machine (par exemple un robot) de devenir l'acteur de son propre apprentissage. Une conséquence substantielle d'un tel système est la possibilité de conférer des capacités cognitives haut-niveau multimodales à des robots pour accroître leur autonomie dans un environnement réel (environnement humain). D) La mise en œuvre de la stratégie proposée dans le cadre de la robotique autonome. Les études et les validations expérimentales réalisées ont notamment confirmé que notre approche permet d'accroître l'autonomie des robots dans un environnement réel. / The work carried out in this thesis concerns the development of an autonomous machine cognition system. The proposed solution rests on the assumption that curiosity is what motivates a cognitive system to acquire new knowledge. Furthermore, two distinct kinds of curiosity are identified, in line with the human cognitive system. On this basis, a two-level cognitive architecture is built. The lower level relies on the perceptual saliency mechanism, while the higher level performs knowledge acquisition through observation of and interaction with the environment. This thesis makes the following contributions: A) A survey of the state of the art in autonomous knowledge acquisition. B) The study, design and realization of the lower cognitive level, which implements the perceptual curiosity mechanism through a novel, fast and robust algorithm for salient object detection and learning. C) The design of the higher cognitive level: a general framework for knowledge acquisition from observation of and interaction with the environment, including humans. Based on epistemic curiosity, the developed high-level cognitive system enables a machine (e.g. a robot) to become the actor of its own learning. An important consequence of this system is the possibility of conferring high-level multimodal cognitive capabilities on robots in order to increase their autonomy in real-world (human) environments. D) The implementation of the proposed strategy in the context of autonomous robotics.
The studies and experimental validations carried out confirmed, in particular, that our approach increases the autonomy of robots in real-world environments.
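For illustration only, the sketch below shows what a generic bottom-up perceptual-saliency computation can look like (a frequency-tuned colour-contrast baseline, not the algorithm developed in the thesis; file name and threshold are invented for the example):

```python
import cv2
import numpy as np

def color_saliency(bgr_image: np.ndarray) -> np.ndarray:
    """Return a saliency map in [0, 1]: each pixel's colour distance to the mean Lab colour."""
    blurred = cv2.GaussianBlur(bgr_image, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_colour = lab.reshape(-1, 3).mean(axis=0)          # global mean colour of the image
    distance = np.linalg.norm(lab - mean_colour, axis=2)    # per-pixel contrast to that mean
    return (distance - distance.min()) / (distance.max() - distance.min() + 1e-9)

# Candidate salient objects (regions worth learning) can then be obtained by thresholding:
# mask = color_saliency(cv2.imread("frame.png")) > 0.6
```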
32

Un système interactif et itératif extraction de connaissances exploitant l'analyse formelle de concepts / An Interactive and Iterative Knowledge Extraction Process Using Formal Concept Analysis

Tang, My Thao 30 June 2016 (has links)
Dans cette thèse, nous présentons notre méthodologie d'extraction interactive et itérative de connaissances à partir de textes : le système KESAM, un outil pour l'extraction des connaissances et le management de l'annotation sémantique. KESAM est basé sur l'Analyse Formelle de Concepts pour l'extraction de connaissances à partir de ressources textuelles et prend en charge l'interaction avec les experts. Dans le système KESAM, l'extraction des connaissances et l'annotation sémantique sont unifiées en un seul processus, au bénéfice à la fois de l'extraction des connaissances et de l'annotation sémantique. Les annotations sémantiques sont utilisées pour formaliser la source de la connaissance dans les textes et garder la traçabilité entre le modèle de connaissances et la source de la connaissance. Le modèle de connaissances est, en retour, utilisé afin d'améliorer les annotations sémantiques. Le processus KESAM a été conçu pour préserver en permanence le lien entre les ressources (textes et annotations sémantiques) et le modèle de connaissances. Le noyau du processus est l'Analyse Formelle de Concepts (AFC), qui construit le modèle de connaissances, i.e. le treillis de concepts, et assure le lien entre le modèle et les annotations. Afin d'obtenir un treillis résultant aussi proche que possible des besoins des experts du domaine, nous introduisons un processus itératif qui permet l'interaction des experts sur le treillis. Les experts sont invités à évaluer et à affiner le treillis ; ils peuvent y apporter des changements jusqu'à parvenir à un accord entre le modèle et leurs propres connaissances ou les besoins de l'application. Grâce au lien entre le modèle de connaissances et les annotations sémantiques, ceux-ci peuvent co-évoluer afin d'améliorer leur qualité par rapport aux exigences des experts du domaine. En outre, grâce à la construction par l'AFC de concepts définis par des ensembles d'objets et des ensembles d'attributs, le système KESAM est capable de prendre en compte à la fois les concepts atomiques et les concepts définis, c'est-à-dire les concepts définis par un ensemble d'attributs. Afin de combler l'écart possible entre le modèle de représentation basé sur un treillis de concepts et le modèle de représentation d'un expert du domaine, nous présentons ensuite une méthode formelle pour l'intégration des connaissances de l'expert dans le treillis de concepts, de manière à préserver la structure du treillis. La connaissance de l'expert est codée sous la forme d'un ensemble de dépendances entre attributs, qui est aligné avec l'ensemble des implications fournies par le treillis de concepts, ce qui conduit à des modifications du treillis d'origine. La méthode permet également aux experts de garder la trace des changements entre le treillis d'origine et la version finale contrainte, et de voir comment les concepts utilisés en pratique sont liés aux concepts issus automatiquement des données. Nous pouvons construire les treillis contraints sans changer les données et fournir la trace des changements en utilisant des projections extensionnelles sur les treillis.
À partir d'un treillis d'origine, deux projections différentes produisent deux treillis contraints différents ; par conséquent, l'écart entre le modèle de représentation basé sur un treillis de concepts et le modèle de représentation d'un expert du domaine est comblé par les projections. / In this thesis, we present a methodology for interactively and iteratively extracting knowledge from texts - the KESAM system: a tool for Knowledge Extraction and Semantic Annotation Management. KESAM is based on Formal Concept Analysis for extracting knowledge from textual resources and supports expert interaction. In the KESAM system, knowledge extraction and semantic annotation are unified into one single process, to the benefit of both knowledge extraction and semantic annotation. Semantic annotations are used for formalizing the source of knowledge in texts and keeping the traceability between the knowledge model and the source of knowledge. The knowledge model is, in return, used for improving semantic annotations. The KESAM process has been designed to permanently preserve the link between the resources (texts and semantic annotations) and the knowledge model. The core of the process is Formal Concept Analysis, which builds the knowledge model, i.e. the concept lattice, and ensures the link between the knowledge model and the annotations. In order to get the resulting lattice as close as possible to domain experts' requirements, we introduce an iterative process that enables expert interaction on the lattice. Experts are invited to evaluate and refine the lattice; they can make changes in the lattice until they reach an agreement between the model and their own knowledge or the application's needs. Thanks to the link between the knowledge model and semantic annotations, the two can co-evolve in order to improve their quality with respect to domain experts' requirements. Moreover, by using FCA to build concepts with definitions as sets of objects and sets of attributes, the KESAM system is able to take into account both atomic and defined concepts, i.e. concepts that are defined by a set of attributes. In order to bridge the possible gap between the representation model based on a concept lattice and the representation model of a domain expert, we then introduce a formal method for integrating expert knowledge into concept lattices in such a way that the lattice structure is maintained. The expert knowledge is encoded as a set of attribute dependencies which is aligned with the set of implications provided by the concept lattice, leading to modifications in the original lattice. The method also allows the experts to keep a trace of the changes between the original lattice and the final constrained version, and to see how concepts used in practice are related to concepts automatically issued from the data. The method uses extensional projections to build the constrained lattices without changing the original data and to provide the trace of changes. From an original lattice, two different projections produce two different constrained lattices, and thus the gap between the representation model based on a concept lattice and the representation model of a domain expert is bridged by the projections.
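As a rough illustration of the Formal Concept Analysis step at the core of such a process, here is a minimal, naive enumeration of the formal concepts of a tiny object/attribute context; the toy data are invented, and real systems such as KESAM rely on incremental lattice-construction algorithms rather than this brute-force sketch:

```python
from itertools import combinations

def formal_concepts(context: dict[str, set[str]]):
    """Enumerate all (extent, intent) pairs of a small formal context (object -> attributes)."""
    objects = list(context)
    all_attrs = set().union(*context.values())

    def intent(objs):   # attributes shared by every object in objs
        return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

    def extent(attrs):  # objects that have every attribute in attrs
        return {o for o in objects if attrs <= context[o]}

    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            i = intent(set(objs))
            concepts.add((frozenset(extent(i)), frozenset(i)))
    return concepts

# Toy annotation context: documents (objects) described by extracted terms (attributes).
ctx = {"doc1": {"gene", "protein"}, "doc2": {"gene", "disease"}, "doc3": {"gene", "protein", "disease"}}
for ext, itt in sorted(formal_concepts(ctx), key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```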
33

Extração de conhecimento simbólico em técnicas de aprendizado de máquina caixa-preta por similaridade de rankings / Symbolic knowledge extraction from black-box machine learning techniques with ranking similarities

Bianchi, Rodrigo Elias 26 September 2008 (has links)
Técnicas de Aprendizado de Máquina não-simbólicas, como Redes Neurais Artificiais, Máquinas de Vetores de Suporte e combinação de classificadores, têm mostrado um bom desempenho quando utilizadas para análise de dados. A grande limitação dessas técnicas é a falta de compreensibilidade do conhecimento armazenado em suas estruturas internas. Esta Tese apresenta uma pesquisa realizada sobre métodos de extração de representações compreensíveis do conhecimento armazenado nas estruturas internas dessas técnicas não-simbólicas, aqui chamadas de caixa preta, durante seu processo de aprendizado. A principal contribuição desse trabalho é a proposta de um novo método pedagógico para extração de regras que expliquem o processo de classificação seguido por técnicas não-simbólicas. Esse novo método é baseado na otimização (maximização) da similaridade entre rankings de classificação produzidos por técnicas de Aprendizado de Máquina simbólicas e não-simbólicas (de onde o conhecimento interno está sendo extraído). Experimentos foram realizados com vários conjuntos de dados e os resultados obtidos sugerem um bom potencial para o método proposto. / Non-symbolic Machine Learning techniques, such as Artificial Neural Networks, Support Vector Machines and ensembles of classifiers, have shown good performance when used in data analysis. Their main limitation is the lack of comprehensibility of the knowledge stored in their internal structures. This thesis presents an investigation of methods capable of extracting comprehensible representations of the knowledge acquired by these non-symbolic techniques, here called black boxes, during their learning process. The main contribution of this work is the proposal of a new pedagogical method for extracting rules that explain the classification process followed by non-symbolic techniques. This new method is based on the optimization (maximization) of the similarity between classification rankings produced by symbolic Machine Learning techniques and by the non-symbolic techniques from which the internal knowledge is being extracted. Experiments were performed on several datasets, and the results obtained suggest that the proposed method has good potential.
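The sketch below is a hedged illustration of the general pedagogical rule-extraction idea: a symbolic learner (a decision tree) is fitted to a black box's outputs, and fidelity is measured by the similarity of the two models' classification rankings (here Spearman correlation of predicted scores). The dataset, models and search loop are stand-ins, not the optimisation procedure of the thesis:

```python
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier   # stands in for the black box
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
oracle_labels = black_box.predict(X)                   # query the black box, not the true labels

best = None
for depth in range(2, 8):                              # search for the most faithful tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, oracle_labels)
    rho, _ = spearmanr(black_box.predict_proba(X)[:, 1], tree.predict_proba(X)[:, 1])
    if best is None or rho > best[0]:
        best = (rho, tree)

print(f"ranking similarity (Spearman) = {best[0]:.3f}")
print(export_text(best[1], max_depth=3))               # human-readable rules
```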
34

Modélisation spatio-temporelle multi-niveau à base d'ontologies pour le suivi de la dynamique en imagerie satellitaire / Ontology-based multi-level spatio-temporal modeling for monitoring dynamics in satellite imagery

Ghazouani, Fethi 10 December 2018 (has links)
La modélisation de la dynamique des objets spatio-temporels fait partie des sujets de recherche pour le suivi et l'interprétation des changements affectant le globe terrestre. Pour cela, l'exploitation des images satellitaires se présente comme un moyen efficace qui aide à l'étude de la dynamique des phénomènes spatio-temporels qui peuvent se produire sur la surface de la Terre, notamment l'urbanisation, la déforestation, la désertification, etc. Divers modèles et approches ont été proposés pour modéliser les évolutions des objets spatio-temporels. Toutefois, chaque modèle présente une capacité limitée pour capturer l'évolution des différentes caractéristiques de l'environnement ; de plus, la structure de représentation utilisée par chaque modèle ne permet pas de saisir complètement la sémantique de l'évolution d'un objet spatio-temporel. Les travaux de notre thèse s'intéressent à la modélisation de la dynamique des objets spatio-temporels pour l'interprétation des changements en imagerie satellitaire. En conséquence, nous avons proposé dans un premier temps une architecture ontologique multi-niveaux pour la représentation et la modélisation des objets et des processus spatio-temporels dynamiques. Également, nous avons présenté une nouvelle stratégie d'interprétation sémantique de scènes d'images satellites pour l'interprétation de changements. Le cadre applicatif concerne l'interprétation sémantique d'une scène d'images satellites pour l'interprétation des phénomènes de changement, tels que l'urbanisation et la déforestation. Le résultat obtenu est une carte de changements qui pourra guider une meilleure gestion de l'utilisation/couverture des sols. / Modeling the dynamics of spatio-temporal objects is one of the research topics concerned with monitoring and interpreting the changes affecting the Earth. Satellite images are an effective means of studying the dynamics of spatio-temporal phenomena that can occur on the surface of the Earth, including urbanization, deforestation, flooding, desertification, and so on. Various models and approaches have been proposed to model the evolution of spatio-temporal objects. However, each of these models has a limited ability to capture the evolution of the different characteristics of the environment, and the representation structure used by each model does not fully capture the semantics of the evolution of a spatio-temporal object. This thesis focuses on modeling the dynamics of spatio-temporal objects for change interpretation in satellite imagery. We therefore first proposed a multi-level ontological architecture for representing and modeling dynamic spatio-temporal objects and processes. We also presented a new semantic scene interpretation strategy for change interpretation in remote sensing imagery. The application framework concerns the semantic interpretation of satellite image scenes for the interpretation of change phenomena such as urbanization and deforestation. The result is a change map that can guide better management of land use/cover.
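As a loose illustration of how a change process can be represented as linked spatio-temporal statements in an ontology, the sketch below encodes a single urbanization event relating two observed states with rdflib; the namespace, class and property names are invented for the example and are not those of the thesis's ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/change#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

g.add((EX.zone42_2010, RDF.type, EX.ForestArea))   # state observed in the 2010 image
g.add((EX.zone42_2018, RDF.type, EX.UrbanArea))    # state observed in the 2018 image
g.add((EX.event1, RDF.type, EX.Urbanization))      # the change process linking both states
g.add((EX.event1, EX.before, EX.zone42_2010))
g.add((EX.event1, EX.after, EX.zone42_2018))
g.add((EX.event1, EX.observedAt, Literal("2018-12-01", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```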
35

Extraction en langue chinoise d'actions spatiotemporalisées réalisées par des personnes ou des organismes / Extraction of spatiotemporally located actions performed by individuals or organizations from Chinese texts

Wang, Zhen 09 June 2016 (has links)
La thèse a deux objectifs : le premier est de développer un analyseur qui permet d'analyser automatiquement des sources textuelles en chinois simplifié afin de segmenter les textes en mots et de les étiqueter par catégories grammaticales, ainsi que de construire les relations syntaxiques entre les mots. Le deuxième est d'extraire des informations autour des entités et des actions qui nous intéressent à partir des textes analysés. Afin d'atteindre ces deux objectifs, nous avons traité principalement les problématiques suivantes : les ambiguïtés de segmentation et de catégorisation ; le traitement des mots inconnus dans les textes chinois ; l'ambiguïté de l'analyse syntaxique ; la reconnaissance et le typage des entités nommées. Le texte d'entrée est traité phrase par phrase. L'analyseur commence par un traitement typographique au sein des phrases afin d'identifier les écritures latines et les chiffres. Ensuite, nous segmentons la phrase en mots à l'aide de dictionnaires. Grâce aux règles linguistiques, nous créons des hypothèses de noms propres et changeons les poids des catégories ou des mots selon leurs contextes gauches et/ou droits. Un modèle de langue n-gramme élaboré à partir d'un corpus d'apprentissage permet de sélectionner le meilleur résultat de segmentation et de catégorisation. Une analyse en dépendance est utilisée pour marquer les relations entre les mots. Nous effectuons une première identification d'entités nommées à la fin de l'analyse syntaxique. Ceci permet d'identifier les entités nommées en unité ou en groupe nominal et également de leur attribuer un type. Ces entités nommées sont ensuite utilisées dans l'extraction. Les règles d'extraction permettent de valider ou de changer les types des entités nommées. L'extraction des connaissances est composée de deux étapes : extraire et annoter automatiquement des contenus à partir des textes analysés ; vérifier les contenus extraits et résoudre la cohérence à travers une ontologie. / We have developed an automatic analyser and an extraction module for Chinese language processing. The analyser performs automatic Chinese word segmentation based on linguistic rules and dictionaries, part-of-speech tagging based on n-gram statistics, and dependency grammar parsing. The module extracts information about named entities and activities. In order to achieve these goals, we have tackled the following main issues: segmentation and part-of-speech ambiguity; unknown word identification in Chinese text; attachment ambiguity in parsing. Chinese texts are analysed sentence by sentence. Given a sentence, the analyser begins with typographic processing to identify sequences of Latin characters and numbers. Then, dictionaries are used for preliminary segmentation into words. Linguistic rules are used to create proper noun hypotheses and to change the weight of some word categories; these rules take into account word context. An n-gram language model built from a training corpus selects the best word segmentation and parts of speech. Dependency grammar parsing is used to annotate relations between words. A first step of named entity recognition is performed after parsing. Its goal is to identify single-word and noun-phrase-based named entities and to determine their semantic type. These named entities are then used in knowledge extraction. Knowledge extraction rules are used to validate named entities or to change their types.
Knowledge extraction consists of two steps: automatic extraction and tagging of content from the analysed texts, followed by verification of the extracted contents and ontology-based coreference resolution.
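For illustration, the toy sketch below shows how dictionary-based segmentation candidates can be scored by a simple language model so that the best split wins; the mini-lexicon, counts and unigram scoring are invented stand-ins for the dictionaries and n-gram model described in the abstract:

```python
import math

# Hypothetical mini-lexicon with word counts (not the thesis's resources).
LEXICON = {"研究": 50, "生命": 40, "研究生": 30, "命": 5, "起源": 20}
TOTAL = sum(LEXICON.values())

def segmentations(text: str):
    """Enumerate dictionary-based segmentations; unknown single characters are allowed."""
    if not text:
        yield []
        return
    for i in range(1, len(text) + 1):
        word = text[:i]
        if word in LEXICON or i == 1:
            for rest in segmentations(text[i:]):
                yield [word] + rest

def score(words):
    """Unigram log-likelihood with add-one smoothing; an n-gram model plays this role in the analyser."""
    return sum(math.log((LEXICON.get(w, 0) + 1) / (TOTAL + len(LEXICON))) for w in words)

best = max(segmentations("研究生命起源"), key=score)
print(best)   # ['研究', '生命', '起源'] is preferred over ['研究生', '命', '起源']
```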
36

Seleção e construção de features relevantes para o aprendizado de máquina. / Relevant feature selection and construction for machine learning.

Lee, Huei Diana 27 April 2000 (has links)
No Aprendizado de Máquina Supervisionado - AM - é apresentado ao algoritmo de indução um conjunto de instâncias de treinamento, no qual cada instância é um vetor de features rotulado com a classe. O algoritmo de indução tem como tarefa induzir um classificador que será utilizado para classificar novas instâncias. Algoritmos de indução convencionais baseiam-se nos dados fornecidos pelo usuário para construir as descrições dos conceitos. Uma representação inadequada do espaço de busca ou da linguagem de descrição do conjunto de instâncias, bem como erros nos exemplos de treinamento, podem tornar os problemas de aprendizado difíceis. Um dos problemas centrais em AM é a Seleção de um Subconjunto de Features - SSF - na qual o objetivo é tentar diminuir o número de features que serão fornecidas ao algoritmo de indução. São várias as razões para a realização de SSF. A primeira é que a maioria dos algoritmos de AM computacionalmente viáveis não trabalham bem na presença de muitas features, isto é, a precisão dos classificadores gerados pode ser melhorada com a aplicação de SSF. Ainda, com um número menor de features, a compreensibilidade do conceito induzido pode ser melhorada. Uma terceira razão é o alto custo para coletar e processar grande quantidade de dados. Existem, basicamente, três abordagens para a SSF: embedded, filtro e wrapper. Por outro lado, se as features utilizadas para descrever os exemplos de treinamento são inadequadas, os algoritmos de aprendizado estão propensos a criar descrições excessivamente complexas e imprecisas. Porém, essas features, individualmente inadequadas, podem algumas vezes ser, convenientemente, combinadas, gerando novas features que podem mostrar-se altamente representativas para a descrição de um conceito. O processo de construção de novas features é conhecido como Construção de Features ou Indução Construtiva - IC. Neste trabalho são enfocadas as abordagens filtro e wrapper para a realização de SSF, bem como a IC guiada pelo conhecimento. É descrita uma série de experimentos usando SSF e IC utilizando quatro conjuntos de dados naturais e diversos algoritmos simbólicos de indução. Para cada conjunto de dados e cada indutor, são realizadas várias medidas, tais como precisão, tempo de execução do indutor e número de features selecionadas pelo indutor. São descritos também diversos experimentos realizados utilizando três conjuntos de dados do mundo real. O foco desses experimentos não está somente na avaliação da performance dos algoritmos de indução, mas também na avaliação do conhecimento extraído. Durante a extração de conhecimento, os resultados foram apresentados aos especialistas para que fossem feitas sugestões para experimentos futuros. Uma parte do conhecimento extraído desses três estudos de casos foi considerada muito interessante pelos especialistas. Isso mostra que a interação de diferentes áreas de conhecimento, neste caso específico, áreas médica e computacional, pode produzir resultados interessantes. Assim, para que a aplicação do Aprendizado de Máquina possa gerar frutos é necessário que dois grupos de pesquisadores sejam unidos: aqueles que conhecem os métodos de AM existentes e aqueles com o conhecimento no domínio da aplicação para o fornecimento de dados e a avaliação do conhecimento adquirido. / In supervised Machine Learning - ML - an induction algorithm is typically presented with a set of training instances, where each instance is described by a vector of feature values and a class label.
The task of the induction algorithm (inducer) is to induce a classifier that will be useful in classifying new cases. Conventional inductive-learning algorithms rely on existing user-provided data to build their descriptions. An inadequate representation space or description language, as well as errors in training examples, can make learning problems difficult. One of the main problems in ML is the Feature Subset Selection - FSS - problem, i.e. the learning algorithm is faced with the problem of selecting some subset of features upon which to focus its attention, while ignoring the rest. There are a variety of reasons that justify doing FSS. The first is that most of the ML algorithms that are computationally feasible do not work well in the presence of a very large number of features, meaning that FSS can improve the accuracy of the classifiers generated by these algorithms. Another reason to use FSS is that it can improve comprehensibility, i.e. the human ability to understand the data and the rules generated by symbolic ML algorithms. A third reason for doing FSS is the high cost of collecting data in some domains. Finally, FSS can reduce the cost of processing huge quantities of data. Basically, there are three approaches in Machine Learning for FSS: embedded, filter and wrapper approaches. On the other hand, if the features provided for describing the training examples are inadequate, the learning algorithms are likely to create excessively complex and inaccurate descriptions. These individually inadequate features can sometimes be conveniently combined, generating new features which can turn out to be highly representative for the description of the concept. The process of constructing new features is called Constructive Induction - CI. In this work we focus on the filter and wrapper approaches for FSS as well as knowledge-driven CI. We describe a series of experiments for FSS and CI, performed on four natural datasets using several symbolic ML algorithms. For each dataset, various measures are taken to compare the inducers' performance, for example accuracy, time taken to run the inducers, and the number of features selected by each evaluated induction algorithm. Several experiments using three real-world datasets are also described. The focus of these three case studies is not only to compare the induction algorithms' performance, but also to evaluate the extracted knowledge. During the knowledge extraction step, results were presented to the specialist, who gave many suggestions for the development of further experiments. Some of the knowledge extracted from these three real-world datasets was found very interesting by the specialist. This shows that the interaction between different areas, in this case the medical and computational areas, may produce interesting results. Thus, two groups of researchers need to be put together if the application of ML is to bear fruit: those who are acquainted with the existing ML methods, and those with expertise in the given application domain, who can provide data and evaluate the acquired knowledge.
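As a hedged sketch of the wrapper approach to Feature Subset Selection mentioned above: greedy forward selection driven by the cross-validated accuracy of the very inducer that will be used, here a decision tree standing in for the symbolic inducers evaluated in the thesis (dataset and stopping rule are illustrative choices only):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
inducer = DecisionTreeClassifier(random_state=0)

selected: list[int] = []
remaining = list(range(X.shape[1]))
best_score = 0.0
while remaining:
    # try adding each remaining feature and keep the one that improves CV accuracy the most
    scores = {f: cross_val_score(inducer, X[:, selected + [f]], y, cv=5).mean() for f in remaining}
    f, s = max(scores.items(), key=lambda kv: kv[1])
    if s <= best_score:          # stop when no single feature improves the estimate
        break
    selected.append(f)
    remaining.remove(f)
    best_score = s

print(f"selected feature indices: {selected}, CV accuracy = {best_score:.3f}")
```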
37

Agile Prototyping : A combination of different approaches into one main process

Abu Baker, Mohamed January 2009 (has links)
Software prototyping is considered one of the most important tools used by software engineers nowadays to understand the customer's requirements and to develop software products that are efficient, reliable, and economically acceptable. Software engineers can choose any of the available prototyping approaches, based on the software they intend to develop and how fast they would like to proceed during development. Generally speaking, all prototyping approaches aim to help engineers understand the customer's true needs and to examine different software solutions, quality aspects, verification activities, etc. that might affect the quality of the software under development, as well as to avoid potential development risks. A combination of several prototyping approaches and brainstorming techniques, which fulfilled the aim of the knowledge extraction approach, resulted in a prototyping approach in which engineers develop one and only one throwaway prototype to extract more knowledge than expected, improving the quality of the software under development by spending more time studying it from different points of view. The knowledge extraction approach was then applied to the developed prototyping approach, treating the developed model itself as a software prototype in order to gain more knowledge from it. This activity resulted in several points of view and improvements that were implemented in the developed model, and as a result Agile Prototyping (AP) was developed. AP integrates further development approaches into the first prototyping model, such as agile methods, documentation, software configuration management, and fractional factorial design. The main aim of developing one and only one prototype, helping engineers gain more knowledge while reducing development effort, time, and cost, was thereby accomplished; developing software products of satisfactory quality is still achieved by developing an evolutionary prototype and building throwaway prototypes on top of it.
38

Partager le savoir du lexicographe: extraction et modélisation ontologique des savoirs lexicographiques

Comeau, Sophie 12 1900 (has links)
Cette recherche porte sur la lexicologie, la lexicographie et l’enseignement/apprentissage du lexique. Elle s’inscrit dans le cadre du projet Modélisation ontologique des savoirs lexicographiques en vue de leur application en linguistique appliquée, surnommé Lexitation, qui est, à notre connaissance, la première tentative d’extraction des savoirs lexicographiques — i.e. connaissances déclaratives et procédurales utilisées par des lexicographes — utilisant une méthode expérimentale. Le projet repose sur le constat que les savoirs lexicographiques ont un rôle crucial à jouer en lexicologie, mais aussi en enseignement/apprentissage du lexique. Dans ce mémoire, nous décrirons les méthodes et les résultats de nos premières expérimentations, effectuées à l’aide du Think Aloud Protocol (Ericsson et Simon, 1993). Nous expliquerons l’organisation générale des expérimentations et comment les savoirs lexicographiques extraits sont modélisés pour former une ontologie. Finalement, nous discuterons des applications possibles de nos travaux en enseignement du lexique, plus particulièrement pour la formation des maîtres. / This research is about lexicology, lexicography and vocabulary teaching/learning. It is part of a project called Ontologization of lexicographic abilities for use in the fields of applied linguistics, nicknamed Lexitation, which is, to our knowledge, the first attempt at extracting lexicographic abilities using experimental techniques. The project relies on the assumption that lexicographic abilities play a role in the teaching and acquisition of lexical knowledge, and not only in lexicography per se. We will describe the methods and results of our initial set of experiments, which are based on the Think Aloud Protocol (Ericsson and Simon, 1993). We will explain how the experiments have been set up and how we are currently proceeding with the extraction and modeling of the various types of knowledge and strategies used by lexicographers while performing lexicographic tasks. Finally, we will present possible applications of our work in the field of language teaching, more specifically teacher training.
39

Verslo žinių išgavimo iš egzistuojančių programų sistemų tyrimas / Business Knowledge Extraction from Existing Software Systems

Normantas, Kęstutis 16 January 2014 (has links)
Darbe nagrinėjama programų sistemų palaikymo ir vystymo problema. Nustatyta, jog sąnaudos šiose programų sistemos gyvavimo ciklo fazėse siekia iki 80% visų sąnaudų, skiriamų programų sistemai kurti. Pagrindinis šio reiškinio veiksnys yra nuolatinis poreikis pritaikyti sistemų funkcionalumą prie besikeičiančių verslo reikalavimų, o tokios užduotys apima didžiąją dalį visų palaikymo veiklų. Nagrinėti tyrimai parodė, kad programų sistemose įgyvendintai verslo logikai suprasti sugaištama 40–60% pakeitimams atlikti skirto laiko, kadangi atsakingi už sistemų palaikymą žmonės paprastai nėra jų projektuotojai, todėl turi dėti dideles pastangas, kad išsiaiškintų sistemos veikimo principus. Be to, pakeitimai, atliekami palaikymo metu, yra retai dokumentuojami (ar net nedokumentuojami visai), o supratimas įgytas įgyvendinant pakeitimus lieka individualių programuotojų galvose. Tuo tarpu kiti tyrimai atskleidė, jog paprastai tik trečdalis programų sistemos kodo įgyvendina verslo logiką, o kita dalis yra skirta platformos ir infrastruktūros funkcijoms įgyvendinti. Iš to darytina išvada, jog išgaunant dalykinės srities žinias bei išlaikant atsekamumą tarp jų ir jas įgyvendinančio programinio kodo, galima sumažinti sistemų palaikymo ir vystymo kaštus. Todėl pagrindinis šio darbo tikslas yra patobulinti verslo žinių išgavimo ir vaizdavimo procesą, pasiūlant metodą ir palaikančias priemones, kurios palengvintų egzistuojančių programų sistemų suvokimą. Darbas susideda iš įvado, 4 dalių, bendrųjų... [toliau žr. visą tekstą] / The dissertation addresses the problem of software maintenance and evolution. It identifies that spending within these software lifecycle phases may account for up to 80% of software's total lifecycle cost, whereas the inability to adapt software quickly and reliably to meet ever-changing business requirements may lead to business opportunities being lost. The main reason for this phenomenon is that most of the maintenance effort is devoted to understanding the software to be modified. On the other hand, related studies show that less than one-third of software source code implements business logic, while the remaining part is intended for platform- or infrastructure-related activities. It follows that if most changes in software are made due to the need to adapt its functionality to changed business requirements, then facilitating software comprehension with automated business knowledge extraction methods may significantly reduce the cost of software maintenance and evolution. Therefore, the main goal of this thesis is to improve the business knowledge extraction process by proposing a method and supporting tool framework that would facilitate comprehension of existing software systems. The dissertation consists of the following parts: Introduction, 4 chapters, General Conclusions, References, and 6 Annexes. Chapter 1 presents a systematic literature review of related studies in order to summarize the state of the art in this research field... [to full text]
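As a loose, language-agnostic illustration of keeping extracted domain knowledge traceable to the code that implements it, the sketch below walks the AST of a small hypothetical code fragment and reports each conditional (a crude "business rule candidate") with its source location; real business knowledge extraction works on much richer program models than this:

```python
import ast

SOURCE = '''
def shipping_cost(order_total, country):
    if order_total > 100 and country == "LT":
        return 0            # free-shipping rule
    if country != "LT":
        return 15
    return 5
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.If):
        rule = ast.unparse(node.test)    # Python 3.9+: render the condition as readable text
        print(f"line {node.lineno}: business rule candidate -> {rule}")
```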
40

Aplicações de sistemas multiagentes na previsão espacial de demanda elétrica em sistemas de distribuição /

Trujillo, Joel David Melo. January 2010 (has links)
Resumo: Neste trabalho apresentam-se dois métodos para serem aplicados na previsão espacial de demanda elétrica, os quais simulam as influências de cargas especiais nas vizinhanças e utilizam os sistemas multiagentes para caracterizar a área de serviço, mostrando assim a dinâmica dos grupos sociais em uma cidade à procura dos recursos necessários para suas atividades. O primeiro sistema multiagente foi desenvolvido para obter a previsão espacial de demanda elétrica de toda a área de serviço e o segundo sistema multiagente modela a influência de cargas especiais nas vizinhanças. Estes sistemas apresentam um caráter estocástico, para simular a estocasticidade dos usuários nos sistemas de distribuição. Os métodos apresentados consideram a disponibilidade atual de dados nas empresas do setor, usando só o banco de dados comercial da empresa de serviço elétrico e o conjunto de dados georreferenciados dos elementos da rede. Uma das contribuições deste trabalho é utilizar um número real para representar a demanda elétrica esperada de cada subárea, fornecendo, deste modo, um melhor dado de entrada para realizar o planejamento de expansão da rede elétrica. A metodologia proposta foi testada em um sistema real de uma cidade de médio porte. Como resultados, são gerados mapas de cenários futuros de previsão espacial de demanda para a área de estudo, que mostram a localização espaço-temporal das novas cargas. Cada mapa mostra as subáreas onde a nova demanda é esperada, com um número real para o valor da quantidade desta demanda. Os resultados obtidos variam entre 5 e 10% em diferentes simulações, quando comparados com os fornecidos pelo departamento de planejamento da empresa elétrica, que aplica uma metodologia manual, que utiliza o conhecimento e as decisões do planejador para determinar o crescimento da demanda. / Abstract: This work presents two methods to be applied in spatial electric load forecasting, which simulate the influence of special loads on their vicinity and use multi-agent systems to characterize the service area, thus capturing the dynamics of social groups in a city seeking the resources needed for their activities. The first multi-agent system was developed for the spatial electric load forecasting of the entire service area, and the second multi-agent system models the influence of special loads on their vicinity. These systems have a stochastic character, to simulate the stochastic behaviour of users in distribution systems. The methods presented consider the data currently available to utilities, using only the commercial consumer database and the georeferenced data set of the network elements. One of the contributions of this work is the use of a real number to represent the expected demand in each subarea, thus providing better input data for the expansion planning of the distribution grid. The proposed methodology was tested on a real system of a midsize city. As results, maps of future spatial demand scenarios are generated for the study area, showing the spatio-temporal location of the new loads. Each map shows the subareas where new demand is expected, with a real number giving the amount of that demand. The results vary between 5 and 10% across different simulations when compared with those provided by the planning department of the electric utility, which applies a manual methodology based on the planner's knowledge and decisions to determine demand growth.
Orientador: Antonio Padilha Feltrin / Coorientador: Edgar Manuel Carreño Franco / Banca: Carlos Roberto Minussi / Banca: Sérgio Luís Haffner / Mestre
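For illustration, a highly simplified sketch of the multi-agent idea: "consumer" agents pick subareas of a service territory at random, weighted by an attractiveness score, and their demand is accumulated per subarea to form a spatial forecast map. The subareas, weights and demand ranges are invented here; real models would derive them from the utility's commercial and georeferenced network databases:

```python
import random
from collections import defaultdict

random.seed(42)
subareas = {"A1": 0.9, "A2": 0.4, "A3": 0.7}      # attractiveness per subarea (assumed values)
forecast_kw = defaultdict(float)

for _ in range(500):                               # 500 new-consumer agents
    area = random.choices(list(subareas), weights=list(subareas.values()))[0]
    forecast_kw[area] += random.uniform(2.0, 8.0)  # stochastic demand of each new consumer, in kW

for area, kw in sorted(forecast_kw.items()):
    print(f"{area}: expected new demand of about {kw:.0f} kW")
```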
