  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1131

Automatická korektura chyb ve výstupu strojového překladu / Automatic Error Correction of Machine Translation Output

Variš, Dušan. January 2016.
We present MLFix, an automatic statistical post-editing system and a spiritual successor to the rule-based system Depfix. The aim of this thesis was to investigate possible approaches to automatically identifying the most common morphological errors produced by state-of-the-art machine translation systems, and to train statistical models on the acquired knowledge. We performed both automatic and manual evaluation of the system and compared the results with Depfix. The system was developed mainly on English-to-Czech machine translation output; however, the aim was to generalize the post-editing process so that it can be applied to other language pairs. We modified the original pipeline to post-edit English-German machine translation output and performed additional evaluation of this modification.
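The abstract does not reproduce the thesis's models, but the core idea of statistical post-editing (learning the most likely human correction of a machine-translated form, given a source-side context) can be sketched as follows. All tags, words, and training triples below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_postedit_model(triples):
    """Learn, for each (source_tag, mt_form) context, the most frequent
    human-corrected target form. `triples` are tuples of
    (source_tag, mt_form, corrected_form)."""
    counts = defaultdict(Counter)
    for src_tag, mt_form, gold in triples:
        counts[(src_tag, mt_form)][gold] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

def postedit(model, src_tags, mt_tokens):
    """Replace each MT token with the learned correction when one exists."""
    return [model.get((t, w), w) for t, w in zip(src_tags, mt_tokens)]

# Toy training data: an English plural noun (NNS) mistranslated as a
# Czech singular form, corrected by a human post-editor.
data = [
    ("NNS", "kočka", "kočky"),
    ("NNS", "kočka", "kočky"),
    ("NN", "kočka", "kočka"),
]
model = train_postedit_model(data)
print(postedit(model, ["NNS"], ["kočka"]))  # ['kočky']
```

A real system would condition on far richer morphological features; this only shows the train-then-substitute shape of the approach.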
1132

Zodpovídání dotazů o obrázcích / Visual Question Answering

Hajič, Jakub. January 2017.
Visual Question Answering (VQA) is a recently proposed multimodal task in the general area of machine learning. The input to this task consists of a single image and an associated natural language question, and the output is the answer to that question. In this thesis we propose two incremental modifications to an existing model which won the VQA Challenge in 2016 using multimodal compact bilinear pooling (MCB), a novel way of combining modalities. First, we add a language attention mechanism, and on top of that we introduce an image attention mechanism focusing on objects detected in the image ("region attention"). We also experiment with ways of combining these in a single end-to-end model. The thesis describes the MCB model, our extensions, and their two different implementations, and evaluates them on the original VQA Challenge dataset for direct comparison with the original work.
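Compact bilinear pooling approximates the outer product of two feature vectors (here, image and question features) by count-sketching each vector and convolving the sketches via FFT. A minimal NumPy sketch of that operation — the dimensions and random features are arbitrary, and this is not the authors' implementation:

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project vector x into d dimensions via a count sketch
    (hash indices h, random signs s)."""
    y = np.zeros(d)
    for i, v in enumerate(x):
        y[h[i]] += s[i] * v
    return y

def mcb(x, y, d, seed=0):
    """Multimodal compact bilinear pooling: sketch both vectors, multiply
    their FFTs elementwise, and invert -- approximating the (huge) outer
    product of x and y inside a d-dimensional space."""
    rng = np.random.default_rng(seed)
    hx, hy = rng.integers(0, d, len(x)), rng.integers(0, d, len(y))
    sx, sy = rng.choice([-1, 1], len(x)), rng.choice([-1, 1], len(y))
    fx = np.fft.rfft(count_sketch(x, hx, sx, d))
    fy = np.fft.rfft(count_sketch(y, hy, sy, d))
    return np.fft.irfft(fx * fy, d)

# Fuse a 2048-d "image" vector with a 300-d "question" vector into 512 dims.
img_feat = np.random.default_rng(1).normal(size=2048)
txt_feat = np.random.default_rng(2).normal(size=300)
fused = mcb(img_feat, txt_feat, d=512)
print(fused.shape)  # (512,)
```

In the actual model the hash indices and signs are fixed once and shared across examples, and the fused vector feeds a classifier over candidate answers.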
1133

Extrakce relací v policejních záznamech / Relation extraction in police records

Ejem, Richard. January 2017.
This work addresses the problem of relation extraction between named entities at the sentence level, assuming the named entities are already tagged in the text, on the domain of police reports written by the Anti-drug Department of the Police of the Czech Republic. We used various machine learning methods in combination with tree kernel functions, as well as methods based on sentence syntax rules. None of these methods achieved satisfactory results on the data provided by the Police of the Czech Republic. Subsequent analysis showed that the relation annotation in the data was missing many relations that were obvious to a human reader, which is why supervised machine learning was not successful. Later in this work we present several manually identified rules for recognizing relations. The findings of this work may be helpful for future research on processing these police reports.
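A rule of the kind arrived at above — a trigger word between two tagged entities — can be illustrated with a toy extractor. The entity tags, Czech trigger verbs, and relation labels below are invented examples, not the thesis's actual rules:

```python
# Hypothetical rule: a PERSON entity followed later in the sentence by a
# trigger verb immediately preceding a SUBSTANCE entity yields a relation.
TRIGGERS = {"prodal": "SELLS", "koupil": "BUYS"}  # Czech: sold / bought

def extract_relations(tagged_sentence):
    """tagged_sentence: list of (token, entity_tag) pairs; the tag is None
    for plain words. Returns (head, relation, dependent) triples."""
    relations = []
    for i, (tok, tag) in enumerate(tagged_sentence):
        if tag != "PERSON":
            continue
        for j in range(i + 1, len(tagged_sentence)):
            word, _ = tagged_sentence[j]
            if word in TRIGGERS and j + 1 < len(tagged_sentence):
                nxt_tok, nxt_tag = tagged_sentence[j + 1]
                if nxt_tag == "SUBSTANCE":
                    relations.append((tok, TRIGGERS[word], nxt_tok))
    return relations

sent = [("Novák", "PERSON"), ("prodal", None), ("pervitin", "SUBSTANCE")]
print(extract_relations(sent))  # [('Novák', 'SELLS', 'pervitin')]
```

Such handwritten rules trade recall for precision, which matches the thesis's finding that they were more usable than models trained on incomplete annotation.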
1134

[en] A MACHINE LEARNING APPROACH FOR PORTUGUESE TEXT CHUNKING / [pt] UMA ABORDAGEM DE APRENDIZADO DE MÁQUINA PARA SEGMENTAÇÃO TEXTUAL NO PORTUGUÊS

GUILHERME CARLOS DE NAPOLI FERREIRA. 10 February 2017.
Text chunking is a highly relevant Natural Language Processing task that consists in dividing a sentence into disjoint sequences of syntactically correlated words. One of the factors that contributes strongly to its importance is that its results are used as significant input to more complex linguistic problems, among them full parsing, clause identification, dependency parsing, semantic role labeling and machine translation. In particular, Machine Learning approaches to these tasks benefit greatly from the use of a chunk feature. A respectable number of effective chunk extraction strategies for English has been presented over the last few years. However, as far as we know, no comprehensive study has been done on text chunking for Portuguese that demonstrates its benefits. The scope of this work is the Portuguese language, and its objective is twofold. First, we analyze the impact of different chunk definitions, using a heuristic that relies on prior full-parsing annotation to generate chunks. Then, we propose Machine Learning models for chunk extraction based on the Entropy Guided Transformation Learning technique. We employ the Bosque corpus, from the Floresta Sintá(c)tica project, in our experiments. Using golden values determined by our heuristic, a chunk feature improves the F1 score of a clause identification system for Portuguese by 6.85 and the accuracy of a dependency parsing system by 1.54. Moreover, our best chunk extractor achieves an F1 of 87.95 when automatic part-of-speech tags are applied.
The empirical findings indicate that chunk information derived by our heuristic is indeed relevant to more elaborate tasks targeting Portuguese. Furthermore, the effectiveness of our extractors is comparable to that of their state-of-the-art counterparts for English, considering that our proposed models are reasonably simple.
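Chunk features of the kind discussed above are conventionally emitted as BIO labels over part-of-speech tags. A crude heuristic chunker, shown only to make the representation concrete (the thesis derives its chunks from full parses, not from this rule):

```python
def pos_to_chunks(pos_tags):
    """Heuristic chunker: group maximal runs of determiners, adjectives and
    nouns into NP chunks, emitting BIO labels. A rough stand-in for the
    parse-derived chunk definitions studied in the thesis."""
    NP_TAGS = {"DET", "ADJ", "NOUN", "PROPN"}
    labels, in_np = [], False
    for tag in pos_tags:
        if tag in NP_TAGS:
            labels.append("I-NP" if in_np else "B-NP")
            in_np = True
        else:
            labels.append("O")
            in_np = False
    return labels

print(pos_to_chunks(["DET", "NOUN", "VERB", "ADJ", "NOUN"]))
# ['B-NP', 'I-NP', 'O', 'B-NP', 'I-NP']
```

The resulting label per token is exactly the kind of "chunk feature" that the clause identification and dependency parsing experiments consume.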
1135

[en] DEEP ARCHITECTURE FOR QUOTATION EXTRACTION / [pt] ARQUITETURA PROFUNDA PARA EXTRAÇÃO DE CITAÇÕES

LUIS FELIPE MULLER DE OLIVEIRA HENRIQUES. 28 July 2017.
Quotation Extraction and Attribution is the task of identifying quotations in a given text and associating them with their authors. In this work, we present a Quotation Extraction and Attribution system for the Portuguese language. The task has previously been approached using various techniques, for a variety of languages and datasets. Traditional models for this task extract a rich set of hand-designed features and use them to feed a shallow classifier. In this work, unlike the traditional approach, we avoid hand-designed features, using unsupervised learning techniques and deep neural networks to automatically learn features relevant to solving the task. By avoiding manual feature engineering, our machine learning model becomes easily adaptable to other languages and domains. Our model is trained and evaluated on the GloboQuotes corpus, and achieves an F1 performance of 89.43 percent.
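A shallow baseline of the kind such deep models are compared against can be sketched as a regex quote finder with nearest-name attribution. The speech-verb list and the attribution window below are invented for illustration, not taken from the thesis:

```python
import re

SPEECH_VERBS = {"disse", "afirmou", "said"}  # hypothetical trigger set

def extract_quotes(text, names):
    """Baseline quotation extractor: find double-quoted spans, then attribute
    each to the first known name appearing shortly after the quote."""
    quotes = []
    for m in re.finditer(r'"([^"]+)"', text):
        tail = text[m.end():m.end() + 60]  # small attribution window
        speaker = None
        for name in names:
            if re.search(rf"\b{name}\b", tail):
                speaker = name
                break
        quotes.append((m.group(1), speaker))
    return quotes

text = '"A economia vai crescer", disse Maria.'
print(extract_quotes(text, ["Maria"]))
# [('A economia vai crescer', 'Maria')]
```

Hand-built heuristics like this break on indirect speech and nested attribution, which is the motivation for learned features in the thesis.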
1136

Le web social et le web sémantique pour la recommandation de ressources pédagogiques / Social Web and semantic Web for recommendation in e-learning

Ghenname, Mérième. 02 December 2015.
This work was jointly supervised under a cotutelle agreement between two universities: in France, Université Jean Monnet de Saint-Etienne, Hubert Curien laboratory (Frédérique Laforest, Christophe Gravier, Julien Subercaze), and in Morocco, Université Mohamed V de Rabat, LeRMA team (Rachida Ajhoun, Mounia Abik). Knowledge, education and learning are major concerns in today's society. Technologies for human learning aim to promote, stimulate, support and validate the learning process. Our approach explores the opportunities raised by combining Social Web and Semantic Web technologies for e-learning. More precisely, we work on enriching learner profiles from their activities on the Social Web. The Social Web can be an important source of information, as it involves users in the information world and gives them the ability to participate in the construction and dissemination of knowledge. We focus on tracking the different types of contributions, activities and conversations in learners' spontaneous collaborative activities on social networks.
The learner profile is based not only on the knowledge extracted from the learner's activities on the e-learning system, but also on his or her many activities on social networks. In particular, we propose a methodology for exploiting the hashtags contained in users' writings to automatically generate learner interests and enrich their profiles. Hashtags, however, require some processing before they can serve as a source of knowledge about user interests. We have defined a method to identify the semantics of hashtags and the semantic relationships between the meanings of different hashtags. Furthermore, we have defined the concept of the Folksionary: a hashtag dictionary that, for each hashtag, clusters its definitions into units of meaning. Semantically enriched hashtags are then used to feed the learner's profile so as to personalize recommendations of learning material. The goal is to build a semantic representation of learners' activities and interests on social networks in order to enrich their profiles. We also present our general approach to multidimensional recommendation in an e-learning environment, built on three types of filtering: personalized filtering based on the learner profile, social filtering based on the learner's activities on social networks, and local filtering based on statistics of the learner's interactions with the system. Our implementation focuses on personalized recommendation.
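The hashtag processing described above — splitting a hashtag into words, then looking each word's candidate meanings up in the Folksionary — can be sketched as follows. The tiny dictionary and its entries are invented stand-ins for the real, corpus-built resource:

```python
import re

# Hypothetical miniature "Folksionary": each word maps to clusters of
# definitions grouped by meaning, as the thesis describes.
FOLKSIONARY = {
    "ml": ["machine learning", "milliliter"],
    "nlp": ["natural language processing"],
}

def split_hashtag(tag):
    """Split a #CamelCase hashtag into lowercase words."""
    body = tag.lstrip("#")
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", body)
    return [p.lower() for p in parts]

def hashtag_meanings(tag):
    """Look up candidate meanings for each word of the hashtag; an empty
    list means the word is unknown to the dictionary."""
    return {w: FOLKSIONARY.get(w, []) for w in split_hashtag(tag)}

print(split_hashtag("#DeepLearning"))  # ['deep', 'learning']
print(hashtag_meanings("#ml"))  # {'ml': ['machine learning', 'milliliter']}
```

Disambiguating between the candidate meanings (e.g. "machine learning" vs. "milliliter") from surrounding posts is the harder step the thesis's semantic-relationship method addresses.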
1137

A Research Bed For Unit Selection Based Text To Speech Synthesis System

Konakanchi, Parthasarathy. 02 1900.
After experimenting with the Festival Speech Synthesis System, we decided to develop our own TTS framework, conducive to the research experiments needed to develop good-quality TTS for Indian languages. Most attempts at Indian-language TTS have no prosody model, no provision for handling foreign-language words, and no phrase-break prediction that would allow appropriate pauses to be introduced in the synthesized speech. Further, in the Indian context there is a real need for a bilingual TTS involving English along with the Indian language; it may even be desirable to have a trilingual TTS that also handles the language of a neighboring state, or Hindi. Thus, there is a felt need for a full-fledged TTS development framework which lends itself to experimentation involving all of the above issues and more. This thesis is therefore a serious attempt to develop a modular, unit-selection-based TTS framework. The developed system has been tested for its effectiveness in creating intelligible speech in Tamil and Kannada, and has been used to carry out two research experiments on TTS. The first part of the work is the design and development of a corpus-based concatenative Tamil speech synthesizer in Matlab and C. A synthesis database has been created with 1027 phonetically rich, pre-recorded sentences, segmented at the phone level. From the sentence to be synthesized, specifications of the required target units are predicted. During synthesis, database units are selected that best match the target specification according to a distance metric and a concatenation quality metric. To accelerate matching, the features of the end frames of the database units are precomputed and stored. The selected units are concatenated to produce synthetic speech.
The high mean opinion scores obtained for the TTS output show that speech synthesized using our TTS is intelligible, acceptably natural, and could be put to commercial use with some additional features. Experiments carried out by others using our TTS framework have shown that whenever the required phonetic context is not available in the synthesis database, similar phones that are perceptually indistinguishable may be substituted. The second part of the work deals with modifying the developed TTS framework to be embedded in mobile phones. Commercial GSM FR, EFR and AMR speech codecs are used to compress our synthesis database. Perception experiments reveal that speech synthesized from a highly compressed database is reasonably natural. This holds promise for reading SMS messages and emails on mobile phones in Indian languages. Finally, we observe that incorporating prosody and pause models for Indian-language TTS would further enhance the quality of the synthetic speech; these remain potential, unexplored areas for research in speech synthesis for Indian languages.
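Unit selection of the kind described — choosing database units that jointly minimize a target-distance metric and a concatenation-quality metric — is conventionally implemented as a Viterbi search over candidate lattices. A toy sketch with integer "units" and made-up cost functions, not the thesis's actual metrics:

```python
def select_units(candidates, target_cost, concat_cost):
    """Viterbi search over candidate database units: for each target slot,
    keep the best-cost path ending in each candidate, combining target cost
    with the join (concatenation) cost from the previous unit."""
    best = [{u: (target_cost(0, u), [u]) for u in candidates[0]}]
    for t in range(1, len(candidates)):
        layer = {}
        for u in candidates[t]:
            cost, path = min(
                ((best[-1][p][0] + concat_cost(p, u), best[-1][p][1])
                 for p in best[-1]),
                key=lambda x: x[0],
            )
            layer[u] = (cost + target_cost(t, u), path + [u])
        best.append(layer)
    return min(best[-1].values(), key=lambda x: x[0])[1]

# Toy costs: target cost is distance from the slot index, join cost is the
# jump size between consecutive units.
cands = [[0, 5], [1, 9], [2, 8]]
path = select_units(cands, lambda t, u: abs(u - t), lambda a, b: abs(a - b))
print(path)  # [0, 1, 2]
```

In a real synthesizer the candidates are phone-sized waveform segments and the costs compare spectral features of the units' end frames, which is why the thesis precomputes those features.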
1138

Automatic Text Ontological Representation and Classification via Fundamental to Specific Conceptual Elements (TOR-FUSE)

Razavi, Amir Hossein. January 2012.
In this dissertation, we introduce a novel text representation method used mainly for text classification. The representation is initially based on a variety of closeness relationships between pairs of words in text passages across the entire corpus. It is then used as the basis for our multi-level lightweight ontological representation method (TOR-FUSE), in which documents are represented according to their contexts and the goal of the learning task. This is unlike traditional representation methods, in which all documents are represented solely by their constituent words, in total isolation from the goal they are represented for. We believe choosing the correct granularity of representation features is an important aspect of text classification. Interpreting data in a more general dimensional space, with fewer dimensions, can convey more discriminative knowledge and decrease the level of learning perplexity. The multi-level model allows data to be interpreted in a more conceptual space, rather than one containing only the scattered words occurring in texts. It aims to extract the knowledge tailored to the classification task by automatically creating a lightweight ontological hierarchy of representations. In the last step, we train a tailored ensemble learner over a stack of representations at different conceptual granularities. The final result is a mapping and weighting of the targeted concept of the original learning task over a stack of representations and the granular conceptual elements of its different levels (a hierarchical mapping instead of a linear mapping over a vector). Finally, the entire algorithm is applied to a variety of general text classification tasks, and its performance is evaluated in comparison with well-known algorithms.
1139

Understand and Utilise Unformatted Text Documents by Natural Language Processing algorithms

Lindén, Johannes. January 2017.
News companies need to automate the editorial process of writing about new and breaking events and make it more effective. Current technologies involve robotic programs that fill in values in templates, and website listeners that notify editors when changes are made so that the editor can read the source change on the website itself. Editors can deliver news faster and better if provided directly with abstracts of the external sources. This study applies deep learning algorithms to automatically formulate abstracts and to tag sources with appropriate tags based on context. The study is a full-stack solution, which addresses both the editors' need for speed and the training, testing and validation of the algorithms. Decision Tree, Random Forest and Multi-Layer Perceptron classifiers over phrase document vectors are used to evaluate the categorisation, and Recurrent Neural Networks are used to paraphrase unformatted texts. In the evaluation, models trained by the algorithms with varying parameters are compared on their F-scores. The results show that F-scores increase with the number of training documents and decrease with the number of categories the algorithm must consider. The Multi-Layer Perceptron performs best, followed by Random Forest and finally Decision Tree. Document length also matters: when larger documents are used during training, the score increases considerably. A user survey about the paraphrasing algorithm shows that the paraphrase quality is insufficient to satisfy editors' needs, and confirms a need for more memory to conduct longer experiments.
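The F-score used to compare the models above is the harmonic mean of per-class precision and recall. A minimal computation, with an invented two-class example:

```python
def f1_score(gold, pred, label):
    """Per-class F1: harmonic mean of precision and recall for `label`."""
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label and g != label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented labels: two of four predictions for "sport" are wrong/missed.
gold = ["sport", "sport", "politics", "politics"]
pred = ["sport", "politics", "politics", "politics"]
print(round(f1_score(gold, pred, "sport"), 3))  # 0.667
```

Averaging this per-class score over all categories (macro-F1) is a common way to report the kind of category-count sensitivity the study observes.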
1140

Locating Information in Heterogeneous log files / Localisation d'information dans les fichiers logs hétérogènes

Saneifar, Hassan. 02 December 2011.
In this thesis, we present contributions to the challenging issues encountered in question answering and in locating information in complex textual data such as log files. Question answering systems (QAS) aim to find a relevant fragment of a document that can be regarded as the best possible concise answer to a question given by a user.
In this work, we propose a complete solution for locating information in a special kind of textual data: log files generated by EDA design tools. Nowadays, in many application areas, modern computing systems are instrumented to generate huge reports about occurring events in the form of log files. Log files are generated in every computing field to report the status of systems, products, or causes of problems that can occur. They may also include data about critical parameters, sensor outputs, or a combination of those. Analyzing log files, as an attractive approach to automatic system management and monitoring, has been enjoying a growing amount of attention [Li et al., 2005]. Although the process of generating log files is quite simple and straightforward, log file analysis can be a tremendous task that requires enormous computational resources, long time and sophisticated procedures [Valdman, 2004]. Indeed, many kinds of log files generated in some application domains are not systematically exploited in an efficient way because of their special characteristics. In this thesis, we are mainly interested in log files generated by Electronic Design Automation (EDA) systems. Electronic design automation is a category of software tools for designing electronic systems such as printed circuit boards and Integrated Circuits (ICs). In this domain, certain quality-check rules must be verified to ensure design quality, and verification of these rules is principally performed by analyzing the generated log files. For large designs, where the design tools may generate megabytes or gigabytes of log files each day, the problem is to wade through all of this data to locate the critical information needed to verify the quality-check rules. These log files typically contain a substantial amount of data, so manually locating information is a tedious and cumbersome process.
Furthermore, the particular characteristics of log files, especially those generated by EDA design tools, raise significant challenges for information retrieval from log files. These specific features limit the usefulness of manual analysis techniques and static methods, and automated analysis of such logs is complex due to their heterogeneous and evolving structures and their large, non-fixed vocabulary. In this thesis, each contribution answers questions raised by the specificities of the data or the requirements of the domain. Throughout this work we investigate the main concern: how can the specificities of log files influence information extraction and natural language processing methods? In this context, a key challenge is to provide approaches that take the log file specificities into account while also considering the issues specific to QA in restricted domains. Our contributions are, briefly:
> Proposing a novel method to recognize and identify the logical units in log files in order to segment them according to their structure. We characterize the complex logical units found in log files according to their syntactic characteristics, and within this approach we propose an original type of descriptor that models the textual structure and layout of text documents.
> Proposing an approach to locate the requested information in log files based on passage retrieval. To improve the performance of passage retrieval and overcome difficulties such as vocabulary mismatch, we propose a novel query expansion approach that adapts an initial query to all types of corresponding log files. It relies on two relevance feedback steps: in the first, we determine explicit relevance feedback by identifying the context of questions; the second consists of a novel type of pseudo relevance feedback. Our method is based on a new term weighting function introduced in this work, called TRQ (Term Relatedness to Query), which gives high weight to terms that have a high probability of belonging to relevant passages. We also investigate how to apply our query expansion approach to documents from general domains.
> Studying the use of morpho-syntactic knowledge in our approaches. For this purpose, we are interested in terminology extraction from log files, and introduce our approach, named Exterlog (EXtraction of TERminology from LOGs), adapted to the specificities of logs, which extracts terms according to syntactic patterns. To evaluate the extracted terms and choose the most relevant ones, we propose an automatic candidate-term validation protocol that uses a Web-based measure combined with statistical measures, taking the specialized context of log files into account.
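The real TRQ function is defined in the thesis; as a loose illustration of the underlying idea — scoring corpus terms by how strongly they associate with query terms across passages — one might write the following. The passages and the normalization are invented for this sketch:

```python
from collections import Counter

def trq_scores(passages, query_terms):
    """Toy stand-in for a TRQ-style weighting: score each corpus term by
    how often it appears in a passage containing a query term, normalized
    by the term's overall frequency. (The actual TRQ function in the
    thesis differs.)"""
    cooc, freq = Counter(), Counter()
    qset = set(query_terms)
    for passage in passages:
        toks = passage.split()
        hit = bool(qset & set(toks))
        for t in toks:
            freq[t] += 1
            if hit:
                cooc[t] += 1
    return {t: cooc[t] / freq[t] for t in freq if t not in qset}

# Invented log-like passages: "report" co-occurs with the query term
# "timing" in one of its two occurrences.
passages = ["timing check failed", "timing slack report", "power report clean"]
scores = trq_scores(passages, ["timing"])
print(scores["report"])  # 0.5
```

Terms scoring high under such a function are the natural candidates for query expansion, since they are likely to appear in the relevant passages even when the query's own words do not.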
