1.
Příprava vyhodnocovací sady pro složité problémy rozpoznávání a zjednoznačňování pojmenovaných entit pomocí crowdsourcingu / Preparing Evaluation Set for Complex Problems of Recognition and Disambiguation of Named Entities through Crowdsourcing. Pastorek, Peter. January 2019.
This Master's thesis prepares an evaluation set for complex problems of recognition and disambiguation of named entities. The evaluation set is created using automation and crowdsourcing, and can be used to test edge cases in named entity recognition and disambiguation.
2.
Topic Segmentation and Medical Named Entities Recognition for Pictorially Visualizing Health Record Summary System. Ruan, Wei. 03 April 2019.
Medical information visualization makes optimized use of digitized medical record data, e.g. the Electronic Medical Record. This thesis extends the Pictorial Information Visualization System (PIVS) developed by Yongji Jin (Jin, 2016) and Jiaren Suo (Suo, 2017), a graphical visualization system that picturizes a patient's medical history summary, depicting the patient's medical information so that patients and doctors can easily grasp past and present conditions. So far, the summary information has been entered manually into the interface, taken from clinical notes.
This study proposes a methodology for automatically extracting medical information from patients' clinical notes using natural language processing techniques, in order to produce a medical history summary from past records. We develop a named entity recognition system that extracts information about medical imaging procedures (performance date, body location, imaging results, and so on) and medications (names, frequencies, and quantities) by applying a conditional random field model with three main feature groups: word-based, part-of-speech, and MetaMap semantic features. Adding MetaMap semantic features is a novel idea that raised accuracy compared to previous studies. Our evaluation shows that our model achieves higher accuracy than others on medication extraction, used as a case study.
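A minimal sketch of this kind of feature design for a linear-chain CRF, using sklearn-crfsuite; the exact feature templates are assumptions, and the `sem` column stands in for a MetaMap semantic-type lookup rather than reproducing the thesis's setup:

```python
# Sketch of token-level features for a linear-chain CRF NER tagger, in the
# spirit of the word / part-of-speech / semantic feature combination above.
# The `sem` column stands in for a MetaMap semantic-type lookup (e.g. "phsu"
# for a pharmacologic substance) and is an assumption of this example.
import sklearn_crfsuite

def token_features(sent, i):
    word, pos, sem = sent[i]              # (token, POS tag, semantic type)
    feats = {
        "word.lower": word.lower(),       # word-based features
        "word.isdigit": word.isdigit(),
        "pos": pos,                       # part-of-speech feature
        "sem": sem,                       # MetaMap-style semantic feature
    }
    if i > 0:                             # a one-token context window
        feats["-1:word.lower"] = sent[i - 1][0].lower()
        feats["-1:pos"] = sent[i - 1][1]
    else:
        feats["BOS"] = True
    return feats

def sent_features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# X_train: sentences as (token, pos, semtype) triples; y_train: BIO sequences
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
# crf.fit([sent_features(s) for s in X_train], y_train)
```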
To further enhance entity extraction accuracy, we also propose a topic segmentation methodology for clinical notes that detects boundaries from the difference between the classification probabilities of successive subsequences, which differs from traditional topic segmentation approaches such as TextTiling, TopicTiling, and the Beeferman statistical model. With topic segmentation combined with named entity extraction, we observed higher accuracy for medication extraction than without segmentation.
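The boundary-detection idea might be sketched as follows, assuming an already-trained topic classifier and vectorizer; the window size and threshold are illustrative, not the thesis's values:

```python
# Sketch of boundary detection from classification-probability differences:
# slide a window of sentences through the note, obtain topic-class
# probabilities from any trained classifier, and place a boundary where
# consecutive windows disagree most. Window size, threshold, and the
# clf/vectorizer pair are illustrative assumptions.
import numpy as np

def detect_boundaries(sentences, clf, vectorizer, window=3, threshold=0.5):
    probs = []
    for i in range(len(sentences) - window + 1):
        chunk = " ".join(sentences[i:i + window])
        probs.append(clf.predict_proba(vectorizer.transform([chunk]))[0])
    boundaries = []
    for i in range(1, len(probs)):
        # L1 distance between the topic distributions of adjacent windows
        if np.abs(probs[i] - probs[i - 1]).sum() > threshold:
            boundaries.append(i + window - 1)  # sentence index of the shift
    return boundaries
```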
Finally, we present a prototype that integrates our information extraction system with PIVS by building a database of interface coordinates and human body part terms.
3.
Reconnaissance des entités nommées par exploration de règles d'annotation : interpréter les marqueurs d'annotation comme instructions de structuration locale / Named entity recognition by mining association rules. Nouvel, Damien. 20 November 2012.
In recent decades, the development of information and communication technologies has deeply modified the way we access knowledge. Facing the volume and diversity of data, it is necessary to work out robust and efficient technologies to retrieve information. The present work considers the recognition and annotation of named entities within radio and TV broadcast transcripts. For this purpose, we interpret the annotation task as a local structuration: we can therefore leverage data to empirically extract rules that govern the presence of annotation markers (tags). In the first part, we introduce our problem: processing named entities. We question the status of named entities (related notions, typologies, evaluation, and annotation) and propose properties to define their linguistic nature. We conclude this part by describing state-of-the-art approaches and by presenting our contribution, which considers the markers (tags) that begin or end an annotation in isolation. In the second part, we review work in data mining and present a formal framework for exploring the data; we propose an alternative formulation by segments, which limits the combinatorics during exploration. Patterns correlated with one or more annotation markers are extracted as annotation rules. The last part describes the experimental setting, some implementation specifics of the resulting system (mXS), and the obtained results. We show the benefit of extracting annotation rules broadly and experiment with segment patterns. We provide quantitative results on the performance of mXS on the Ester2 and Etape datasets, along with various indications about the behaviour of the system from diverse points of view and in diverse configurations. They show that our approach is competitive and that it opens up new perspectives for the observation of natural language and automatic annotation.
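A toy illustration of the underlying intuition, treating begin markers as items in the token stream and keeping the short token patterns that most reliably precede them; mXS's actual segment-based mining is far richer, and all names and thresholds here are invented for the example:

```python
# Toy illustration only: treat a begin marker (e.g. "<pers>") as an item in
# the token stream and keep the short token patterns that most reliably
# precede it. All names and thresholds are invented for the example.
from collections import Counter

def mine_begin_rules(tagged_sents, marker="<pers>", min_support=5, min_conf=0.6):
    pattern_count, pattern_hits = Counter(), Counter()
    for sent in tagged_sents:              # sent: list of tokens and markers
        for i in range(1, len(sent)):
            # up to two plain tokens immediately to the left of position i
            left = tuple(t for t in sent[max(0, i - 2):i] if not t.startswith("<"))
            pattern_count[left] += 1
            if sent[i] == marker:
                pattern_hits[left] += 1
    return {p: pattern_hits[p] / pattern_count[p]
            for p in pattern_hits
            if pattern_count[p] >= min_support
            and pattern_hits[p] / pattern_count[p] >= min_conf}

# e.g. ("monsieur",) -> 0.9 would read: "monsieur" strongly predicts that a
# person annotation opens right after it.
```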
4.
Extracting social networks from fiction: Imaginary and invisible friends: Investigating the social world of imaginary friends. Ek, Adam. January 2017.
This thesis develops an approach that extracts the social relations between characters in literary text to create a social network. The approach uses co-occurrences of named entities, keywords associated with the named entities, and the dependency relations that exist between the named entities to construct the network. Literary texts contain a large number of pronouns referring to the named entities; to resolve their antecedents, a pronoun resolution system is implemented based on a standard pronoun resolution algorithm. The results indicate that the pronoun resolution system finds the correct named entity in 60.4% of all cases. The social network is evaluated by comparing character importance rankings based on graph properties with independently human-generated importance rankings. The generated social networks correlate moderately to strongly with the independent character ranking.
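A minimal sketch of the sentence-level co-occurrence step, using networkx; entities are assumed to be already recognized and pronouns resolved, and the thesis's fuller edge definition (which also involves keywords and dependency relations) is not reproduced:

```python
# Minimal sketch of the sentence-level co-occurrence step, using networkx.
# Entities are assumed to be already recognized and pronouns resolved; two
# characters are linked whenever they appear in the same sentence.
import itertools
import networkx as nx

def build_network(sentence_characters):
    """sentence_characters: list of sets of character names, one per sentence."""
    g = nx.Graph()
    for chars in sentence_characters:
        for a, b in itertools.combinations(sorted(chars), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

net = build_network([{"Alice", "Bob"}, {"Alice", "Bob", "Carol"}, {"Carol"}])
# Character importance can then be ranked by a graph property, e.g. degree:
ranking = sorted(net.degree(weight="weight"), key=lambda x: -x[1])
```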
5.
Acquisition de relations entre entités nommées à partir de corpus / Corpus-based recognition of relations between named entities. Ezzat, Mani. 06 May 2014.
Named entities were the topic of much research during the 1990s. Their detection in texts has reached a sufficient level of maturity, at least for the main categories (person, organization, and location), to go further in the analysis, toward the recognition of relations between entities. For instance, knowing that a text contains occurrences of the words "Google" and "YouTube" can be relevant, but the analysis becomes more interesting if the system is able to link the two elements and type the relation as an acquisition (Google bought YouTube in 2006). Our contribution revolves around two main axes: drawing a more precise perimeter around the definition of relations between named entities, particularly from a linguistic standpoint, and exploring techniques for building automatic extraction systems that involve linguists.
6.
Rozpoznávání a propojování pojmenovaných entit / Named Entity Recognition and Linking. Taufer, Pavel. January 2017.
The goal of this master thesis is to design and implement a named entity recognition and linking algorithm. Part of this goal is to propose and create a knowledge base to be used by the algorithm. Because of the limited amount of data for languages other than English, we want to be able to train our method on one language and then transfer the learned parameters to other languages that do not have enough training data. The thesis consists of a description of available knowledge bases and existing methods, followed by the design and implementation of our own knowledge base and entity linking method. Our method achieves state-of-the-art results on several variants of the AIDA CoNLL-YAGO dataset. It also obtains comparable results on a sample of Czech annotated data from the PDT dataset using parameters trained on the English CoNLL dataset.
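For illustration, the candidate-generation step common to entity linking systems of this kind might look like the following; the alias index and the popularity prior are toy assumptions, not the knowledge base built in the thesis:

```python
# Illustration only: look a mention up in an alias -> entity dictionary
# built from a knowledge base, then pick the candidate with the highest
# prior. The thesis learns this ranking; here it is a fixed toy prior.
def link(mention, alias_index, prior):
    candidates = alias_index.get(mention.lower(), [])
    return max(candidates, key=lambda e: prior.get(e, 0.0), default=None)

alias_index = {"prague": ["Q1085"], "praha": ["Q1085"]}  # toy alias dictionary
print(link("Praha", alias_index, {"Q1085": 0.98}))       # -> Q1085
```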
7.
Lze to říci jinak aneb automatické hledání parafrází / Automatic Identification of Paraphrases. Otrusina, Lubomír. January 2009.
Automatic paraphrase discovery is an important task in natural language processing. Many systems use paraphrases to improve performance, e.g. systems for question answering, information retrieval, or document summarization. In this thesis, we explain basic concepts such as paraphrases and paraphrase patterns, and propose methods for paraphrase discovery from various resources. We then propose an unsupervised method for discovering paraphrases in large plain-text corpora, based on the contexts and keywords occurring between pairs of named entities. Finally, we describe evaluation methods used in the paraphrase discovery area, evaluate our system, and compare it with similar systems.
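A rough sketch of the context-between-NE-pairs intuition, under the assumption that NE pairs and their intervening text have already been extracted; the thesis's keyword-based filtering and scoring are omitted:

```python
# Rough sketch: phrases seen between the same pair of named entities are
# paraphrase candidates (e.g. "acquired" vs. "bought" between "Google" and
# "YouTube"). Keyword-based filtering and scoring are omitted here.
from collections import defaultdict

def contexts_between_pairs(corpus_spans):
    """corpus_spans: iterable of (ne1, ne2, between_text) triples."""
    by_pair = defaultdict(list)
    for ne1, ne2, text in corpus_spans:
        by_pair[(ne1, ne2)].append(text.lower())
    return by_pair

def candidate_paraphrases(by_pair):
    candidates = []
    for pair, ctxs in by_pair.items():
        for i in range(len(ctxs)):
            for j in range(i + 1, len(ctxs)):
                if ctxs[i] != ctxs[j]:
                    candidates.append((pair, ctxs[i], ctxs[j]))
    return candidates

pairs = contexts_between_pairs([("Google", "YouTube", "acquired"),
                                ("Google", "YouTube", "bought")])
print(candidate_paraphrases(pairs))  # [(('Google', 'YouTube'), 'acquired', 'bought')]
```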
8.
Direct Speech Translation Toward High-Quality, Inclusive, and Augmented Systems. Gaido, Marco. 28 April 2023.
When this PhD started, the translation of speech into text in a different language was mainly tackled with a cascade of automatic speech recognition (ASR) and machine translation (MT) models, as the emerging direct speech translation (ST) models were not yet competitive. To close this gap, part of the PhD has been devoted to improving the quality of direct models, both in the simplified condition of test sets where the audio is split into well-formed sentences, and in the realistic condition in which the audio is automatically segmented. First, we investigated how to transfer knowledge from MT models trained on large corpora. Then, we defined encoder architectures that give different weights to the vectors in the input sequence, reflecting the variability of the amount of information over time in speech. Finally, we reduced the adverse effects caused by the suboptimal automatic audio segmentation in two ways: on one side, we created models robust to this condition; on the other, we enhanced the audio segmentation itself. The good results achieved in terms of overall translation quality allowed us to investigate specific behaviors of direct ST systems, which are crucial to satisfy real users’ needs. On one side, driven by the ethical goal of inclusive systems, we disclosed that established technical choices geared toward high general performance (statistical word segmentation of the target text, knowledge distillation from MT) cause an exacerbation of the gender representational disparities in the training data. Along this line of work, we proposed mitigation techniques that reduce the gender bias of ST models, and showed how gender-specific systems can be used to control the translation of gendered words related to the speakers, regardless of their vocal traits. On the other side, motivated by the practical needs of interpreters and translators, we evaluated the potential of direct ST systems in the “augmented translation” scenario, focusing on the translation and recognition of named entities (NEs). Along this line of work, we proposed solutions to cope with the major weakness of ST models (handling person names), and introduced direct models that jointly perform ST and NE recognition showing their superiority over a pipeline of dedicated tools for the two tasks. Overall, we believe that this thesis moves a step forward toward adopting direct ST systems in real applications, increasing the awareness of their strengths and weaknesses compared to the traditional cascade paradigm.
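As one loose reading of "encoder architectures that give different weights to the vectors in the input sequence", a generic attention-style frame weighting can be sketched in PyTorch; this is an assumption-laden illustration, not the thesis's actual architecture:

```python
# One loose reading of "giving different weights to the vectors in the input
# sequence": a learned scalar score per frame reweights the encoder states.
# Generic attention-style weighting for illustration only.
import torch
import torch.nn as nn

class FrameWeighting(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one relevance score per time step

    def forward(self, states):           # states: (time, batch, dim)
        w = torch.softmax(self.score(states), dim=0)
        return states * w                # informative frames weigh more

out = FrameWeighting(256)(torch.randn(100, 8, 256))  # same shape, reweighted
```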
9.
Recherche d'entités nommées complexes sur le web : propositions pour l'extraction et pour le calcul de similarité / Retrieval of Complex Named Entities on the Web: Proposals for Extraction and Similarity Computation. Fotsoh Tawaofaing, Armel. 27 February 2018.
Recent developments in information technologies have made the web an important data source. However, web content is largely unstructured, which makes it difficult for a machine to process automatically in order to extract relevant information. This is why research on Information Extraction (IE) from the web is growing very quickly. Similarly, another heavily explored research area is the querying of the extracted information, generally structured and stored in indexes, to answer precise information needs; this area is known as Information Retrieval (IR). Our research work lies at the crossroads of both areas. Its main goal is to design and implement strategies for crawling the web to extract complex Named Entities (NEs), i.e. NEs composed of several properties that may be text or other NEs, such as companies or events. We then propose indexing and querying services to answer information needs. This work was carried out within the T2I team of the LIUPPA laboratory, in collaboration with Cogniteev, a company whose core business is the analysis of web content. The issues we had to address were, on the one hand, the extraction of complex NEs on the web and, on the other, the indexing and retrieval of information involving these complex NEs. Our first contribution concerns the extraction of complex NEs from text. Here we take several problems into account, in particular the noisy context characterizing some properties (the web page describing an event, for example, may contain more than one date: the date of the event and the opening date of ticket sales). For this particular problem, we introduce a block detection module that focuses property extraction on relevant text blocks. Our experiments show a clear performance improvement due to this approach. We also focused on address extraction, where the main difficulty arises from the fact that no standard has really established itself as a reference model. We therefore propose an extended address model and an extraction approach based on patterns and free resources. Our second contribution deals with similarity computation between complex NEs. In the state of the art, this computation is generally performed in two steps: (i) similarities between properties are calculated, and (ii) the obtained scores are aggregated into an overall similarity. For the first step, we propose a similarity function between spatial NEs, one represented by a point and the other by a polygon, which complements the state of the art. Our main proposals concern the second step: we propose three techniques for aggregating the intermediate scores. The first two are based on the weighted sum of the intermediate scores (linear combination and logistic regression); the third uses decision trees for the aggregation. Finally, we propose a last approach, based on clustering and Salton's vector model, whose originality is that it computes the similarity between complex NEs without going through intermediate similarity scores.
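A minimal sketch of the aggregation step in its weighted-sum variant; the property names and weights are illustrative, and the thesis also learns weights by logistic regression or replaces the sum with a decision tree:

```python
# Minimal sketch: given per-property similarity scores for a pair of complex
# NEs (two event records, say), combine them into one overall score. Weights
# are illustrative assumptions, not learned values from the thesis.
def aggregate_similarity(prop_sims, weights):
    """prop_sims / weights: dicts keyed by property name; weights sum to 1."""
    return sum(weights[p] * s for p, s in prop_sims.items())

score = aggregate_similarity(
    {"name": 0.9, "date": 1.0, "location": 0.4},
    {"name": 0.5, "date": 0.3, "location": 0.2},
)  # -> 0.83
```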
10.
Immersion dans des documents scientifiques et techniques : unités, modèles théoriques et processus / Immersion in scientific and technical documents: units, theoretical models and processes. Andreani, Vanessa. 23 September 2011.
This thesis addresses the issue of accessing the scientific and technical information conveyed by large document collections. To enable users to find the information relevant to them, we worked on defining a model that meets the flexibility requirement of our industrial application context; to this end, we postulate the necessity of segmenting the information drawn from documents into ontological facets. The resulting model enables documentary immersion through a combination of three types of complementary processes: endogenous processes (exploiting the corpus to analyze the corpus), exogenous processes (drawing on external resources), and anthropogenous processes (in which the user's skills are treated as a resource). All of them contribute to granting the user a central role in the system, as an agent who interprets information and constructs knowledge, once placed in an industrial or specialised context.