91

Extrakce strukturovaných dat z českého webu s využitím extrakčních ontologií / Extracting Structured Data from Czech Web Using Extraction Ontologies

Pouzar, Aleš January 2012 (has links)
The presented thesis deals with the task of automatic information extraction from HTML documents for two selected domains: laptop offers are extracted from e-shops, and freely published job offers are extracted from company sites. The extraction process outputs structured data of high granularity grouped into data records, in which a corresponding semantic label is assigned to each data item. The task was performed using the extraction system Ex, which combines two approaches: manually written rules and supervised machine-learning algorithms. Thanks to expert knowledge encoded as extraction rules, the lack of training data could be overcome. The rules are independent of the specific formatting structure, so one extraction model can be used for a heterogeneous set of documents. The success achieved in the case of laptop offers showed that an extraction ontology describing one or a few product types can be combined with wrapper-induction methods to automatically extract offers of all product types at web scale with minimal human effort.
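To make the flavour of this approach concrete, here is a minimal, hypothetical sketch of formatting-independent extraction rules for laptop offers. Plain regular expressions stand in for the much richer Ex extraction ontologies, and the attribute patterns and the sample offer text are invented:

```python
import re

# Hypothetical extraction rules in the spirit of an extraction ontology:
# each attribute is described by a pattern over plain text, not by the
# page's formatting structure, so one model covers heterogeneous pages.
RULES = {
    "ram": re.compile(r"\b(\d{1,2})\s*GB\s*RAM\b", re.IGNORECASE),
    "screen": re.compile(r"\b(\d{2}(?:\.\d)?)\s*(?:inch|\")", re.IGNORECASE),
    "price": re.compile(r"\b(\d[\d\s]{2,})\s*(?:CZK|Kč)\b", re.IGNORECASE),
}

def extract_offer(text):
    """Return a data record mapping semantic labels to extracted values."""
    record = {}
    for label, pattern in RULES.items():
        match = pattern.search(text)
        if match:
            record[label] = match.group(1).replace(" ", "")
    return record

print(extract_offer('Acer Aspire 5, 15.6" display, 8 GB RAM, now 12 990 CZK'))
# {'ram': '8', 'screen': '15.6', 'price': '12990'}
```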
92

GoPubMed: Ontology-based literature search for the life sciences

Doms, Andreas 06 January 2009 (has links)
Background: Most of our biomedical knowledge is accessible only through texts. The biomedical literature grows exponentially, and PubMed comprises over 18,000,000 literature abstracts. Recently much effort has been put into the creation of biomedical ontologies which capture biomedical facts. The exploitation of ontologies to explore the scientific literature is a new area of research. Motivation: When people search, they have questions in mind. Answering questions in a domain requires knowledge of the terminology of that domain. Classical search engines do not provide background knowledge for the presentation of search results. Ontology-annotated structured databases allow for data mining; the hypothesis is that ontology-annotated literature databases allow for text mining. The central problem is to associate scientific publications with ontological concepts, which is a prerequisite for ontology-based literature search. The question then is how to answer biomedical questions using ontologies and a literature corpus. Finally, the task is to automate bibliometric analyses on a corpus of scientific publications. Approach: Recent joint efforts on automatically extracting information from free text showed that the applied methods are complementary. The idea is to employ the rich terminological and relational information stored in biomedical ontologies to mark up biomedical text documents. Based on the established semantic links between documents and ontology concepts, the goal is to answer biomedical questions on a corpus of documents. A fully annotated literature corpus makes it possible, for the first time, to automatically generate bibliometric analyses for ontological concepts, authors and institutions. Results: This work includes a novel framework for annotating free texts with ontological concepts. The framework generates recognition pattern rules from the terminological and relational information in an ontology, and maximum-entropy models can be trained to distinguish the meanings of ambiguous concept labels. The framework was used to develop an annotation pipeline for PubMed abstracts with 27,863 Gene Ontology concepts. The evaluation of the recognition performance yielded a precision of 79.9% and a recall of 72.7%, improving on the previously used algorithm by 25.7% in F-measure. The evaluation was done on a curation corpus of 689 PubMed abstracts with 18,356 concept curations, created manually by the original authors. Methods to reason with ontologies over large amounts of documents were developed, and the ability of the online system to answer questions was shown on a set of biomedical questions from the TREC Genomics Track 2006 benchmark. This work includes the first ontology-based, large-scale, online, up-to-date bibliometric analysis for topics in molecular biology represented by GO concepts. The automatic bibliometric analysis is in line with existing, but often outdated, manual analyses. Outlook: A number of promising continuations of this work have been spun off. A freely available online search engine has a growing user community. A spin-off company, funded by the High-Tech Gründerfonds, commercializes the new ontology-based search paradigm. Several offshoots of GoPubMed have been developed, including GoWeb (general web search), Go3R (search for replacement, reduction and refinement methods for animal experiments), and GoGene (search in gene/protein databases).
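The reported evaluation figures (precision 79.9%, recall 72.7% against 18,356 curated concept annotations) follow the standard micro-averaged scheme; a small sketch of that computation on toy data is shown below. The PubMed IDs and GO concept assignments are placeholders, not actual curation results:

```python
def micro_prf(gold, predicted):
    """Micro-averaged precision/recall/F1 for per-document concept sets."""
    tp = fp = fn = 0
    for doc_id, gold_concepts in gold.items():
        pred_concepts = predicted.get(doc_id, set())
        tp += len(gold_concepts & pred_concepts)
        fp += len(pred_concepts - gold_concepts)
        fn += len(gold_concepts - pred_concepts)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with placeholder PubMed IDs and GO concept IDs.
gold = {"pmid:1": {"GO:0006915", "GO:0008219"}, "pmid:2": {"GO:0007165"}}
pred = {"pmid:1": {"GO:0006915"}, "pmid:2": {"GO:0007165", "GO:0016301"}}
print(micro_prf(gold, pred))  # (0.666..., 0.666..., 0.666...)
```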
93

Knowledge-Enabled Entity Extraction

Al-Olimat, Hussein S. January 2019 (has links)
No description available.
94

[en] EXTRACTING RELIABLE INFORMATION FROM LARGE COLLECTIONS OF LEGAL DECISIONS / [pt] EXTRAINDO INFORMAÇÕES CONFIÁVEIS DE GRANDES COLEÇÕES DE DECISÕES JUDICIAIS

FERNANDO ALBERTO CORREIA DOS SANTOS JUNIOR 09 June 2022 (has links)
As a natural consequence of the digitization of the Brazilian judicial system, a large and growing number of legal documents, especially judicial decisions, have become available on the Internet. As an illustration, in 2020 the Brazilian Judiciary produced 25 million decisions; the Brazilian Supreme Court (STF), the highest judicial body in Brazil, alone produced 99.5 thousand decisions. In line with those numbers, there is a growing demand for studies focused on extracting and exploring the legal knowledge hidden in such large collections of legal documents. However, unlike typical textual content (e.g., books, news, and blog posts), legal text constitutes a particular case of highly conventionalized language, and little attention has been paid to information extraction in specialized domains such as legal texts. From a temporal perspective, the Judiciary itself is a constantly evolving institution that molds itself to cope with the demands of society. Our goal is therefore to propose a reliable process for legal information extraction from large collections of legal documents, based on the STF scenario and the monocratic decisions it published between 2000 and 2018. To do so, we explore the combination of different Natural Language Processing (NLP) and Information Extraction (IE) techniques in the legal domain. From NLP, we explore automated named entity recognition strategies in the legal domain. From IE, we explore dynamic topic modeling with tensor decomposition as a tool to investigate how the legal reasoning embedded in those decisions changes over time, through textual evolution and the presence of legal named entities. For reliability, we explore the interpretability of the methods employed and add visual resources to facilitate interpretation by a domain specialist. As a final result, we expect to propose a reliable and cost-effective process to support further studies in the legal domain and to propose new strategies for information extraction from large collections of documents.
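As a rough illustration of the dynamic-topic idea, the sketch below tracks per-period topics with scikit-learn's NMF as a simple stand-in for the tensor decomposition described above; the decision snippets and period labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy corpus: a handful of invented decision snippets grouped by period.
periods = {
    "2000-2005": ["dano moral indenização consumidor", "habeas corpus prisão preventiva"],
    "2013-2018": ["dano moral serviço telefonia consumidor", "repercussão geral recurso extraordinário"],
}

# Shared vocabulary across periods so topics can be compared over time.
vectorizer = TfidfVectorizer()
vectorizer.fit([doc for docs in periods.values() for doc in docs])

for period, docs in periods.items():
    X = vectorizer.transform(docs)
    nmf = NMF(n_components=2, init="nndsvda", random_state=0)
    nmf.fit(X)
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(nmf.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:3]]
        print(period, f"topic {k}:", ", ".join(top))
```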
95

A Step Toward GDPR Compliance : Processing of Personal Data in Email

Olby, Linnea, Thomander, Isabel January 2018 (has links)
The General Data Protection Regulation (GDPR), enforced on 25 May 2018, is a response to the growing importance of IT in today's society, accompanied by public demand for control over personal data. In contrast to the previous directive, the new regulation applies to personal data stored in an unstructured format, such as email, rather than solely to structured data. Companies are now forced to accommodate this change, among others, in order to be compliant. This study aims to provide a code of conduct for the processing of personal data in email as a measure for reaching compliance. Furthermore, it investigates whether Named Entity Recognition (NER) can aid this process as a means of finding personal data in the form of names. A literature review of current research and recommendations was conducted for the code-of-conduct proposal. A NER system was constructed using a hybrid approach with binary logistic regression, hand-crafted rules and gazetteers. The model was applied to a selection of emails, including attachments, obtained from a small consultancy company in the automotive industry. The proposed code of conduct consists of six items, applied to the consultancy firm. The NER model demonstrated a low ability to identify names and was therefore deemed insufficient for this task.
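A compressed sketch of such a hybrid name detector is shown below, with a gazetteer feature, two hand-crafted rule features and a binary logistic-regression classifier; the gazetteer entries, features and toy training tokens are invented and do not reproduce the thesis implementation:

```python
from sklearn.linear_model import LogisticRegression

GAZETTEER = {"anna", "erik", "maria"}  # hypothetical first-name gazetteer

def features(token, prev_token):
    return [
        int(token.lower() in GAZETTEER),           # gazetteer hit
        int(token[:1].isupper()),                  # hand-crafted rule: capitalized token
        int(prev_token.lower() in {"hej", "hi"}),  # rule: greeting often precedes a name
    ]

# Tiny labeled sample: 1 = token is a personal name.
tokens = [("Hej", ""), ("Anna", "Hej"), ("möte", "Anna"), ("imorgon", "möte"),
          ("Erik", "med"), ("bil", "en")]
labels = [0, 1, 0, 0, 1, 0]
X = [features(t, p) for t, p in tokens]

clf = LogisticRegression().fit(X, labels)
test = [("Maria", "Hej"), ("rapport", "en")]
print(clf.predict([features(t, p) for t, p in test]))  # likely [1 0]
```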
96

Utilizing Transformers with Domain-Specific Pretraining and Active Learning to Enable Mining of Product Labels

Norén, Erik January 2023 (has links)
Structured Product Labels (SPLs), the package inserts that accompany drugs regulated by the Food and Drug Administration (FDA), hold information about Adverse Drug Reactions (ADRs) associated with drugs post-market. This information is valuable for actors working in the field of pharmacovigilance who aim to improve the safety of drugs. One such actor is Uppsala Monitoring Centre (UMC), a non-profit conducting pharmacovigilance research. In order to access the valuable information in the package inserts, UMC has constructed a pipeline for mining SPLs for ADRs. This project investigates new approaches to the Scan problem, the part of the pipeline responsible for extracting mentions of ADRs. The Scan problem is approached as a Named Entity Recognition task, a subtask of Natural Language Processing. By using the transformer-based deep-learning model BERT with domain-specific pre-training, an F1-score of 0.8220 was achieved. Furthermore, the chosen model was used in an iteration of Active Learning in order to efficiently extend the available data pool with the most informative examples, which improved the F1-score to 0.8337. However, Active Learning was benchmarked against a data set extended with random examples, which showed similar improvements; this application of Active Learning could therefore not be determined to be effective in this project.
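The active-learning step can be illustrated independently of BERT. The sketch below shows a generic uncertainty-sampling loop, using scikit-learn's logistic regression on synthetic data as a stand-in for the transformer; pool sizes and the query budget are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(3):  # a few active-learning iterations
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Uncertainty sampling: pick examples whose predicted probability is closest to 0.5.
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    uncertainty = np.abs(proba - 0.5)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[:10]]
    labeled += query                       # "annotate" the queried examples
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: labeled pool = {len(labeled)}")
```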
97

[pt] EXTRAÇÃO DE INFORMAÇÕES DE SENTENÇAS JUDICIAIS EM PORTUGUÊS / [en] INFORMATION EXTRACTION FROM LEGAL OPINIONS IN BRAZILIAN PORTUGUESE

GUSTAVO MARTINS CAMPOS COELHO 03 October 2022 (has links)
Information Extraction is an important task in the legal domain. While structured and machine-processable data is scarce, unstructured data in the form of legal documents, such as legal opinions, is widely available. If properly processed, such documents can provide valuable information about past lawsuits, allowing better assessment by legal professionals and supporting data-driven applications. This study addresses Information Extraction in the legal domain by extracting value from legal opinions related to consumer complaints. More specifically, the extraction of categorical provisions is addressed by classification, where six models based on different frameworks are analyzed. Moreover, the extraction of monetary values related to moral damage compensations is addressed by a Named Entity Recognition (NER) model. For evaluation, a dataset was constructed, containing 964 manually annotated legal opinions (written in Brazilian Portuguese) enacted by lower court judges. The results show an average accuracy of approximately 97 percent when extracting categorical provisions, and 98.9 percent when applying NER for the extraction of moral damage compensations.
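As a hedged illustration of the monetary-value part of the task, a simple regular-expression baseline (not the NER model evaluated above) can locate and normalize Brazilian currency amounts such as "R$ 5.000,00"; the example sentence is invented:

```python
import re

MONEY = re.compile(r"R\$\s*([\d.]+,\d{2})")

def extract_compensations(text):
    """Find Brazilian-format currency amounts and convert them to floats."""
    values = []
    for raw in MONEY.findall(text):
        values.append(float(raw.replace(".", "").replace(",", ".")))
    return values

sentence = ("Julgo procedente o pedido e condeno a ré ao pagamento de "
            "R$ 5.000,00 a título de danos morais.")
print(extract_compensations(sentence))  # [5000.0]
```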
98

Geo-Locating Tweets with Latent Location Information

Lee, Sunshin 13 February 2017 (has links)
As part of our work on the NSF-funded Integrated Digital Event Archiving and Library (IDEAL) project and the Global Event and Trend Archive Research (GETAR) project, we collected over 1.4 billion tweets using over 1,000 keywords, key phrases, mentions, or hashtags, starting from 2009. Since many tweets talk about events (with useful location information), such as natural disasters, emergencies, and accidents, it is important to geo-locate those tweets whenever possible. Due to possible location ambiguity, finding a tweet's location is often challenging; many distinct places share the same geoname, e.g., "Greenville" matches 50 different locations in the U.S.A. Frequently, the explicit location information in tweets, such as the geonames mentioned, is insufficient, because tweets are often brief and incomplete: given the 140-character limit, they carry only a small fraction of the full location information of an event. Location-indicative words (LIWs) may include latent location information; for example, "Water main break near White House" does not contain any geonames, but the key phrase "White House" ties it to the location "1600 Pennsylvania Ave NW, Washington, DC 20500 USA". To disambiguate tweet locations, we first extracted geospatial named entities (geonames) and predicted implicit state (e.g., Virginia or California) information from these entities using machine-learning algorithms including Support Vector Machines (SVM), Naive Bayes (NB), and Random Forest (RF); implicit state information helps reduce ambiguity. We also studied how location information of events is expressed in tweets and how latent location-indicative information can help to geo-locate tweets, and then used a machine learning (ML) approach to predict the implicit state using geonames and LIWs. We conducted experiments with tweets (e.g., about potholes) and found significant improvement in disambiguating tweet locations using an ML algorithm together with the Stanford NER: adding the state information predicted by our classifiers increased the chance of finding the state-level geo-location unambiguously by up to 80%. We also studied over 6 million tweets (three mid-size and two large collections about water main breaks, sinkholes, potholes, car crashes, and car accidents), covering 17 months. We found that up to 91.1% of tweets have at least one type of location information (geo-coordinates or geonames) or LIWs, and demonstrated that in most cases adding LIWs helps geo-locate tweets with less ambiguity using a geo-coding API. Finally, we conducted additional experiments with the five tweet collections and found significant improvement in disambiguating tweet locations using an ML approach with geonames and all LIWs present in tweet texts as features. / Ph. D.
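A minimal analogue of the state-prediction step is sketched below: a bag-of-words model over geonames and location-indicative words, with Naive Bayes standing in for the SVM/NB/RF classifiers mentioned above; the training tweets and state labels are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training tweets containing geonames/LIWs, with known state labels.
tweets = [
    "water main break near White House",
    "pothole on Pennsylvania Ave downtown DC",
    "sinkhole closes lane in Blacksburg near Virginia Tech",
    "car crash on I-81 near Roanoke",
]
states = ["DC", "DC", "VA", "VA"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, states)

print(model.predict(["pothole reported near the White House lawn"]))  # ['DC']
```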
99

L'identification des entités nommées en arabe en vue de leur extraction et classification automatiques : la construction d’un système à base de règles syntactico-sémantique / Identification of Arabic named entities with a view to their automatic extraction and classification: a syntactico-semantic rule-based system

Asbayou, Omar 01 December 2016 (has links)
This thesis explains and presents our approach to building a rule-based system for the automatic recognition and classification of Arabic named entities. The work involves two disciplines: linguistics and computer science. Computer tools and linguistic rules are merged to give birth to a new discipline, Natural Language Processing, which operates on different levels (morphosyntactic, syntactic, semantic, syntactico-semantic, etc.). In our particular case, we have put the necessary linguistic information and rules at the service of the software, which should be able to apply them in order to recognize and classify, through syntactic and semantic annotations, the different named entity classes. This thesis falls within the general domain of natural language processing, and more particularly within the continuity of the work accomplished on morphosyntactic analysis and on the lexical databases SAMIA and then DIINAR, together with the accompanying scientific research. The task aims at lexical enrichment with simple and complex named entities and at establishing the transition from morphological analysis to syntactic and syntactico-semantic analysis, with the analysis of textual content as the ultimate objective. To understand what is at stake, it was important to start with a definition of the named entity. To carry out this task, we distinguished between two main named entity types: pure proper names and descriptive named entities. We also established a referential classification based on different classes and sub-classes, which constitutes the reference for our semantic annotations. Nevertheless, we were confronted with two major difficulties: lexical ambiguity and the boundaries of complex named entities. Our system adopts a syntactico-semantic rule-based approach: after Level 0 of morphosyntactic analysis, it consists of five levels of syntactic and syntactico-semantic patterns based on the necessary linguistic information (morphosyntactic, syntactic, semantic and syntactico-semantic). After evaluation on two corpora, this work obtained very good results in terms of precision, recall and F-measure. The output of our system makes an interesting contribution to different applications of natural language processing, especially the two tasks of information retrieval and information extraction, in both of which we have concretely exploited it. In addition to this experience, we envisage in future work extending our system to the extraction and classification of sentences in which the classified entities, mainly named entities and verbs, respectively play the roles of arguments and predicates. A second objective is the enrichment of different types of lexical resources such as ontologies.
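A small sketch of what a trigger-based syntactico-semantic rule can look like is given below; the trigger lexicon, entity classes and example sentence are illustrative only, and this is far simpler than the five-level pattern system described above:

```python
# Illustrative trigger words (definite forms) mapped to entity classes.
TRIGGERS = {
    "الدكتور": "PERSON",        # "the doctor"
    "الرئيس": "PERSON",         # "the president"
    "مدينة": "LOCATION",        # "city of"
    "جامعة": "ORGANIZATION",    # "university of"
}

def annotate(text):
    """Tag the token following a trigger word as a named entity of that class."""
    entities = []
    tokens = text.split()
    for i, token in enumerate(tokens[:-1]):
        if token in TRIGGERS:
            entities.append((tokens[i + 1], TRIGGERS[token]))
    return entities

# "President Mohammed met with a delegation from the University of Lyon."
sentence = "التقى الرئيس محمد بوفد من جامعة ليون"
print(annotate(sentence))
# [('محمد', 'PERSON'), ('ليون', 'ORGANIZATION')]
```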
100

Analýza a získávání informací ze souboru dokumentů spojených do jednoho celku / Analysis and Data Extraction from a Set of Documents Merged Together

Jarolím, Jordán January 2018 (has links)
This thesis deals with mining relevant information from documents and with automatically splitting apart multiple documents that have been merged into a single file. It describes the design and implementation of software for data mining from documents and for automatic document splitting. Methods for acquiring textual data from scanned documents, named entity recognition, document clustering, their supporting algorithms, and metrics for the automatic splitting of documents are described. Furthermore, the algorithm of the implemented software is explained, and the tools and techniques it uses are described. Lastly, the success rate of the implemented software is evaluated, and possible extensions and further development of this work are discussed.
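One supporting idea, splitting a merged file where consecutive pages stop resembling each other, can be sketched as follows; TF-IDF cosine similarity with an arbitrary threshold is used here, and the toy page texts do not reflect the thesis's actual metrics:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy OCR output for consecutive pages of a merged scan.
pages = [
    "invoice number 2041 total amount due payment terms",
    "invoice items unit price vat total amount",
    "employment contract between employer and employee salary terms",
    "employment contract termination salary notice period",
]

vectors = TfidfVectorizer().fit_transform(pages)
boundaries = [0]
for i in range(1, len(pages)):
    similarity = cosine_similarity(vectors[i - 1], vectors[i])[0, 0]
    if similarity < 0.1:          # arbitrary threshold: a new document starts here
        boundaries.append(i)

print(boundaries)  # [0, 2]: pages 0-1 form one document, pages 2-3 another
```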
