  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Extracting structured information from Wikipedia articles to populate infoboxes

Lange, Dustin, Böhm, Christoph, Naumann, Felix January 2010 (has links)
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values to extract value parts independently. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
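The core idea of populating infobox attributes from article text can be illustrated with a minimal sketch. This is not iPopulator's actual algorithm (which learns value structures from existing infoboxes); the attribute names and regular expressions below are invented for illustration:

```python
import re

# Hypothetical value-structure patterns, one per infobox attribute; iPopulator
# learns such structures from existing infoboxes, here they are hand-written.
ATTRIBUTE_PATTERNS = {
    "population": re.compile(r"population of ([\d,]+)"),
    "area_km2": re.compile(r"area of ([\d.]+)\s*km2"),
}

def populate_infobox(article_text, patterns=ATTRIBUTE_PATTERNS):
    """Fill infobox attributes from the first matching span in the text."""
    infobox = {}
    for attribute, pattern in patterns.items():
        match = pattern.search(article_text)
        if match:
            infobox[attribute] = match.group(1)
    return infobox

text = "The city has a population of 83,600 and covers an area of 48.2 km2."
print(populate_infobox(text))  # {'population': '83,600', 'area_km2': '48.2'}
```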
2

Language Engineering for Information Extraction

Schierle, Martin 10 January 2012 (has links) (PDF)
Accompanied by the cultural development to an information society and knowledge economy, and driven by the rapid growth of the World Wide Web and decreasing prices for technology and disk space, the world's knowledge is evolving fast, and humans are challenged with keeping up. Despite all efforts on data structuring, a large part of this human knowledge is still hidden behind the ambiguities and fuzziness of natural language. Domain language in particular poses new challenges with its specific syntax, terminology and morphology. Companies willing to exploit the information contained in such corpora are often required to build specialized systems instead of being able to rely on off-the-shelf software libraries and data resources. The engineering of language processing systems is, however, cumbersome, and the creation of language resources, annotation of training data and composition of modules is often more an art than a science. The scientific field of Language Engineering aims at providing reliable information, approaches and guidelines on how to design, implement, test and evaluate language processing systems. Language engineering architectures have been a subject of scientific work for the last two decades and aim at building universal systems of easily reusable components. Although current systems offer comprehensive features and rest on an architecturally sound basis, there is still little documentation about how to actually build an information extraction application. Selecting modules, methods and resources for a distinct use case requires a detailed understanding of state-of-the-art technology, application demands and the characteristics of the input text. The main assumption underlying this work is the thesis that a new application can only occasionally be created by reusing standard components from different repositories.
This work recapitulates existing literature about language resources, processing resources and language engineering architectures to derive a theory of how to engineer a new system for information extraction from a (domain) corpus. This thesis was initiated by the Daimler AG to prepare and analyze unstructured information as a basis for corporate quality analysis. It is therefore concerned with language engineering in the area of Information Extraction, which targets the detection and extraction of specific facts from textual data. While other work in the field of information extraction is mainly concerned with the extraction of location or person names, this work deals with automotive components, failure symptoms, corrective measures and their relations in arbitrary arity. The ideas presented in this work are applied, evaluated and demonstrated on a real-world application dealing with quality analysis on automotive domain language. To achieve this goal, the underlying corpus is examined and scientifically characterized, and algorithms are picked with respect to the derived requirements and evaluated where necessary. The system comprises language identification, tokenization, spelling correction, part-of-speech tagging, syntax parsing and a final relation extraction step. The extracted information is used as input to data mining methods such as an early warning system and a graph-based visualization for interactive root cause analysis. It is finally investigated how the unstructured data facilitates those quality analysis methods in comparison to structured data. The acceptance of these text-based methods in the company's processes further proves the usefulness of the created information extraction system.
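The staged pipeline described above (language identification, tokenization, spelling correction, POS tagging, parsing, relation extraction) can be sketched as a composition of interchangeable modules. The stages and the tiny correction lexicon below are invented placeholders, not the thesis implementation:

```python
# A minimal sketch of a modular text-processing pipeline; each stage is a
# placeholder standing in for a real component of the system described above.
def tokenize(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def correct_spelling(doc, lexicon={"engin": "engine", "leakin": "leaking"}):
    # Hypothetical lexicon-based correction for noisy domain text.
    doc["tokens"] = [lexicon.get(t, t) for t in doc["tokens"]]
    return doc

def extract_relations(doc, components={"engine"}, symptoms={"leaking"}):
    # Toy relation step: pair each known component with each known symptom.
    doc["relations"] = [(c, s) for c in doc["tokens"] if c in components
                        for s in doc["tokens"] if s in symptoms]
    return doc

def run_pipeline(text, stages):
    doc = {"text": text}
    for stage in stages:
        doc = stage(doc)
    return doc

doc = run_pipeline("engin is leakin oil", [tokenize, correct_spelling, extract_relations])
print(doc["relations"])  # [('engine', 'leaking')]
```

The design choice illustrated here is that each stage reads and writes one shared document structure, so modules can be swapped per use case, which is the reuse problem the abstract discusses.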
3

Deep Text Mining of Instagram Data Without Strong Supervision / Textutvinning från Instagram utan Precis Övervakning

Hammar, Kim January 2018 (has links)
With the advent of social media, our online feeds increasingly consist of short, informal, and unstructured text. This data can be analyzed to improve user recommendations and detect trends. The sheer volume of unstructured text available makes the intersection of text processing and machine learning a promising avenue of research. Current methods that use machine learning for text processing are in many cases dependent on annotated training data. However, considering the heterogeneity and variability of social media, obtaining strong supervision for social media data is in practice both difficult and expensive. In light of this limitation, a belief that has shaped this thesis is that text mining methods which can be applied without strong supervision are of high practical interest. This thesis investigates unsupervised methods for scalable processing of text from social media. In particular, it targets a classification and extraction task in the fashion domain on the image-sharing platform Instagram. Instagram is one of the largest social media platforms, containing both text and images. Still, research on text processing in social media is largely limited to Twitter data, and little attention has been paid to text mining of Instagram data. The aim of this thesis is to broaden the scope of state-of-the-art methods for information extraction and text classification to the unsupervised setting, working with informal text on Instagram. Its main contributions are (1) an empirical study of text from Instagram; (2) an evaluation of word embeddings for Instagram text; (3) a distributed implementation of the FastText algorithm; (4) a system for fashion attribute extraction on Instagram using word embeddings; and (5) a multi-label clothing classifier for Instagram text, built with deep learning techniques and minimal supervision.
The empirical study demonstrates that the text distribution on Instagram exhibits the long-tail phenomenon, that the text is just as noisy as has been reported in studies of Twitter text, and that comment sections are multilingual. Experiments with word embeddings for Instagram demonstrate the importance of hyperparameter tuning and reveal a mismatch between pre-trained embeddings and social media text. Furthermore, they confirm that word embeddings are a useful asset for information extraction: word embeddings beat a baseline that uses Levenshtein distance on the task of extracting fashion attributes from Instagram. The results also show that the distributed implementation of FastText reduces the time it takes to train word embeddings by a factor that scales with the number of machines used for training. Finally, our research demonstrates that weak supervision can be used to train a deep classifier, achieving an F1 score of 0.61 on the task of classifying clothes in Instagram posts based only on the associated text, which is on par with human performance.
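The comparison between a Levenshtein-style string baseline and embedding similarity for attribute matching can be illustrated with a toy example. The three-dimensional "embeddings" below are invented; in the thesis they come from FastText trained on Instagram text:

```python
from difflib import SequenceMatcher

# Invented 3-d "embeddings"; real FastText vectors have hundreds of dimensions.
EMB = {
    "dress": (0.9, 0.1, 0.0),
    "klänning": (0.88, 0.12, 0.05),  # Swedish for "dress", spelled very differently
    "drone": (0.0, 0.2, 0.9),        # unrelated word with a similar spelling
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(a * a for a in w) ** 0.5
    return dot / (norm(u) * norm(v))

def string_similarity(a, b):
    # Stand-in for the Levenshtein baseline (higher = more similar).
    return SequenceMatcher(None, a, b).ratio()

# Surface similarity prefers the wrong match; embedding similarity recovers it.
print(string_similarity("dress", "drone") > string_similarity("dress", "klänning"))  # True
print(cosine(EMB["dress"], EMB["klänning"]) > cosine(EMB["dress"], EMB["drone"]))    # True
```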
4

Unsupervised Natural Language Processing for Knowledge Extraction from Domain-specific Textual Resources

Hänig, Christian 25 April 2013 (has links) (PDF)
This thesis aims to develop a Relation Extraction algorithm to extract knowledge from automotive data. While most approaches to Relation Extraction are evaluated only on newspaper data dealing with general relations from the business world, their applicability to other data sets is not well studied. Part I of this thesis deals with the theoretical foundations of Information Extraction algorithms. Text mining cannot be seen as the simple application of data mining methods to textual data. Instead, sophisticated methods have to be employed to accurately extract knowledge from text, which can then be mined using statistical methods from the field of data mining. Information Extraction itself can be divided into two subtasks: Entity Detection and Relation Extraction. The detection of entities is very domain-dependent due to terminology, abbreviations and general language use within the given domain. Thus, this task has to be solved for each domain, employing thesauri or another type of lexicon. Supervised approaches to Named Entity Recognition will not achieve reasonable results unless they have been trained for the given type of data. The task of Relation Extraction can basically be approached by pattern-based and kernel-based algorithms. The latter achieve state-of-the-art results on newspaper data and point out the importance of linguistic features. In order to analyze relations contained in textual data, syntactic features like part-of-speech tags and syntactic parses are essential. Chapter 4 presents machine learning approaches and linguistic foundations essential for syntactic annotation of textual data and Relation Extraction. Chapter 6 analyzes the performance of state-of-the-art algorithms for POS tagging, syntactic parsing and Relation Extraction on automotive data. The finding is that supervised methods trained on newspaper corpora do not achieve accurate results when applied to automotive data. This has several causes.
Besides low-quality text, the nature of automotive relations poses the main challenge. Automotive relation types of interest (e.g. component – symptom) are rather arbitrary compared to well-studied relation types like is-a or is-head-of. In order to achieve acceptable results, algorithms have to be trained directly on this kind of data. As the manual annotation of data for each language and data type is too costly and inflexible, unsupervised methods are the ones to rely on. Part II deals with the development of dedicated algorithms for all three essential tasks. Unsupervised POS tagging (Chapter 7) is a well-studied task, and algorithms achieving accurate tagging exist. However, none of them disambiguates high-frequency words; only out-of-lexicon words are disambiguated. Most high-frequency words bear syntactic information, and thus it is very important to differentiate between their different functions. Domain languages in particular contain ambiguous, high-frequency words bearing semantic information (e.g. pump). In order to improve POS tagging, an algorithm for disambiguation is developed and used to enhance an existing state-of-the-art tagger. This approach is based on context clustering, which is used to detect a word type's different syntactic functions. Evaluation shows that tagging accuracy is raised significantly. An approach to unsupervised syntactic parsing (Chapter 8) is developed to satisfy the requirements of Relation Extraction. These requirements include high-precision results on nominal and prepositional phrases, as they contain the entities relevant for Relation Extraction. Furthermore, accurate shallow parsing is more desirable than deep binary parsing, as it facilitates Relation Extraction better. Endocentric and exocentric constructions can be distinguished, which improves proper phrase labeling. unsuParse is based on preferred positions of word types within phrases to detect phrase candidates.
Iterating the detection of simple phrases successively induces deeper structures. The proposed algorithm fulfills all demanded criteria and achieves competitive results on standard evaluation setups. Syntactic Relation Extraction (Chapter 9) is an approach exploiting syntactic statistics and text characteristics to extract relations between previously annotated entities. The approach is based on entity distributions in a corpus and thus makes it possible to extend text mining processes to new data in an unsupervised manner. Evaluation on two different languages and two different text types of the automotive domain shows that it achieves accurate results on repair order data. Results are less accurate on internet data, but the tasks of sentiment analysis and extraction of the opinion target can be mastered. Thus, the incorporation of internet data is possible and important, as it provides useful insight into the customer's thoughts. To conclude, this thesis presents a complete unsupervised workflow for Relation Extraction – except for the highly domain-dependent Entity Detection task – improving the performance of each of the involved subtasks compared to state-of-the-art approaches. Furthermore, this work applies Natural Language Processing methods and Relation Extraction approaches to real-world data, unveiling challenges that do not occur in high-quality newspaper corpora.
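The context-clustering idea used to disambiguate high-frequency words such as "pump" can be sketched as follows. The two-way split on the left neighbour is a drastic simplification of the clustering actually used, and the word lists are invented:

```python
from collections import defaultdict

def context_vectors(corpus, target):
    """Collect (left, right) neighbour pairs as crude context features."""
    contexts = []
    for sent in corpus:
        toks = sent.split()
        for i, t in enumerate(toks):
            if t == target:
                left = toks[i - 1] if i > 0 else "<s>"
                right = toks[i + 1] if i < len(toks) - 1 else "</s>"
                contexts.append((left, right))
    return contexts

def cluster_by_left(contexts, determiners={"the", "a"}, infinitive={"to"}):
    # Toy two-cluster split: a determiner context suggests noun use,
    # an infinitive marker suggests verb use.
    clusters = defaultdict(list)
    for left, right in contexts:
        if left in determiners:
            clusters["noun-like"].append((left, right))
        elif left in infinitive:
            clusters["verb-like"].append((left, right))
        else:
            clusters["other"].append((left, right))
    return clusters

corpus = ["the pump failed", "we need to pump the coolant", "a pump was replaced"]
clusters = cluster_by_left(context_vectors(corpus, "pump"))
print({k: len(v) for k, v in clusters.items()})  # {'noun-like': 2, 'verb-like': 1}
```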
5

WebKnox: Web Knowledge Extraction

Urbansky, David 21 August 2009 (has links) (PDF)
This thesis focuses on entity and fact extraction from the web. Different knowledge representations and techniques for information extraction are discussed before the design of a knowledge extraction system, called WebKnox, is introduced. The main contributions of this thesis are the trust ranking of extracted facts with a self-supervised learning loop, and the extraction system with its composition of known and refined extraction algorithms. The techniques used show an improvement in precision and recall for most entity and fact extraction tasks compared to the chosen baseline approaches.
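A self-supervised trust loop of the kind described can be sketched as mutual reinforcement between source trust and fact trust. This is a simplified voting-style illustration, not WebKnox's actual ranking; the sources and facts are invented:

```python
def trust_rank(extractions, iterations=3):
    """extractions: {source: set of (entity, attribute, value) facts}.
    Iteratively score facts by the trust of the sources asserting them,
    and sources by the average trust of their facts."""
    source_trust = {s: 1.0 for s in extractions}
    all_facts = {f for facts in extractions.values() for f in facts}
    fact_trust = {}
    for _ in range(iterations):
        # A fact is as trusted as the sources that assert it.
        fact_trust = {
            f: sum(source_trust[s] for s, facts in extractions.items() if f in facts)
            for f in all_facts
        }
        total = sum(fact_trust.values()) or 1.0
        fact_trust = {f: v / total for f, v in fact_trust.items()}
        # A source is as trusted as the facts it asserts, on average.
        source_trust = {
            s: sum(fact_trust[f] for f in facts) / len(facts)
            for s, facts in extractions.items()
        }
    return fact_trust

extractions = {
    "page_a": {("Berlin", "country", "Germany")},
    "page_b": {("Berlin", "country", "Germany"), ("Berlin", "country", "Poland")},
}
scores = trust_rank(extractions)
# The fact asserted by both sources outranks the contradicting one.
print(scores[("Berlin", "country", "Germany")] > scores[("Berlin", "country", "Poland")])  # True
```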
6

Coreference Resolution for Swedish / Koreferenslösning för svenska

Vällfors, Lisa January 2022 (has links)
This report explores possible avenues for developing coreference resolution methods for Swedish. Coreference resolution is an important topic within natural language processing, as it is used as a preprocessing step in various information extraction tasks. The topic has been studied extensively for English, but much less so for smaller languages such as Swedish. In this report we adapt two coreference resolution algorithms, originally used for English, for use on Swedish texts. One algorithm is entirely rule-based, while the other uses machine learning. We have also annotated a Swedish dataset to be used for training and evaluation. Both algorithms showed promising results, and as neither clearly outperformed the other we conclude that both are good candidates for further development. For the rule-based algorithm, more advanced rules, especially ones that could incorporate some semantic knowledge, were identified as the most important avenue of improvement. For the machine learning algorithm, more training data would likely be the most beneficial. For both algorithms, improved detection of mention spans would also help, as this was identified as one of the most error-prone components.
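A rule-based coreference pass of the kind adapted in the report can be sketched with two classic rules: exact string match, and linking a pronoun to the nearest preceding non-pronoun mention. Real systems add gender, number, and semantic agreement checks; this sketch is illustrative only:

```python
def resolve_coreference(mentions):
    """mentions: ordered list of (index, text) pairs. Returns (mention, antecedent)
    index links produced by two simple rules: pronouns link to the nearest
    preceding non-pronoun mention, and identical names corefer."""
    PRONOUNS = {"han", "hon", "den", "det", "he", "she", "it"}
    links = []
    for i, (idx, text) in enumerate(mentions):
        antecedent = None
        for j in range(i - 1, -1, -1):
            prev = mentions[j][1]
            if text.lower() in PRONOUNS and prev.lower() not in PRONOUNS:
                antecedent = mentions[j][0]  # pronoun rule
                break
            if text.lower() == prev.lower() and text.lower() not in PRONOUNS:
                antecedent = mentions[j][0]  # exact-match rule
                break
        if antecedent is not None:
            links.append((idx, antecedent))
    return links

mentions = [(0, "Lisa"), (1, "hon"), (2, "Lisa")]
print(resolve_coreference(mentions))  # [(1, 0), (2, 0)]
```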
7

Aktives Lernen für Informationsextraktion aus historischen Karten

van Dijk, Thomas C. 24 October 2019 (has links)
Many practical problems in GIS currently cannot be solved automatically, not because our algorithms are too slow, but because we have no satisfactory algorithm at all. This can happen when semantics are involved, e.g. when extracting information or designing visualizations. A computer cannot currently be expected to solve such problems completely unsupervised. We therefore explicitly treat human effort as a resource. An algorithm should do as much work as possible at high quality, but crucially, it must also be intelligent enough to see where it needs help, what it should ask the user, and how to take the user's answers into account. This concept relates to newer areas of computer science such as active learning, but we focus on the proper design and analysis of algorithms and the resulting dialogue between algorithm and human, which we call algorithmically guided user interaction. This approach is applied to information extraction from historical maps.
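The resulting dialogue can be sketched as an uncertainty-sampling loop: the algorithm handles what it can and forwards only the ambiguous cases to the user. The classifier and the "darkness" feature below are invented stand-ins for a real map-analysis model:

```python
def active_learning_round(pool, classifier, budget=1):
    """Pick the pool items the classifier is least certain about and
    hand them to the human annotator (uncertainty sampling).
    `classifier` is any callable returning P(label = 1)."""
    by_uncertainty = sorted(pool, key=lambda x: abs(classifier(x) - 0.5))
    return by_uncertainty[:budget]

# Toy classifier: confidence that a map region contains a building,
# keyed on a single invented feature.
classifier = lambda region: region["darkness"]

pool = [
    {"id": "r1", "darkness": 0.95},  # clearly a building
    {"id": "r2", "darkness": 0.52},  # ambiguous: worth asking the user
    {"id": "r3", "darkness": 0.05},  # clearly background
]
queries = active_learning_round(pool, classifier)
print([r["id"] for r in queries])  # ['r2']
```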
8

Language Engineering for Information Extraction

Schierle, Martin 12 July 2011 (has links)
Accompanied by the cultural development to an information society and knowledge economy, and driven by the rapid growth of the World Wide Web and decreasing prices for technology and disk space, the world's knowledge is evolving fast, and humans are challenged with keeping up. Despite all efforts on data structuring, a large part of this human knowledge is still hidden behind the ambiguities and fuzziness of natural language. Domain language in particular poses new challenges with its specific syntax, terminology and morphology. Companies willing to exploit the information contained in such corpora are often required to build specialized systems instead of being able to rely on off-the-shelf software libraries and data resources. The engineering of language processing systems is, however, cumbersome, and the creation of language resources, annotation of training data and composition of modules is often more an art than a science. The scientific field of Language Engineering aims at providing reliable information, approaches and guidelines on how to design, implement, test and evaluate language processing systems. Language engineering architectures have been a subject of scientific work for the last two decades and aim at building universal systems of easily reusable components. Although current systems offer comprehensive features and rest on an architecturally sound basis, there is still little documentation about how to actually build an information extraction application. Selecting modules, methods and resources for a distinct use case requires a detailed understanding of state-of-the-art technology, application demands and the characteristics of the input text. The main assumption underlying this work is the thesis that a new application can only occasionally be created by reusing standard components from different repositories.
This work recapitulates existing literature about language resources, processing resources and language engineering architectures to derive a theory of how to engineer a new system for information extraction from a (domain) corpus. This thesis was initiated by the Daimler AG to prepare and analyze unstructured information as a basis for corporate quality analysis. It is therefore concerned with language engineering in the area of Information Extraction, which targets the detection and extraction of specific facts from textual data. While other work in the field of information extraction is mainly concerned with the extraction of location or person names, this work deals with automotive components, failure symptoms, corrective measures and their relations in arbitrary arity. The ideas presented in this work are applied, evaluated and demonstrated on a real-world application dealing with quality analysis on automotive domain language. To achieve this goal, the underlying corpus is examined and scientifically characterized, and algorithms are picked with respect to the derived requirements and evaluated where necessary. The system comprises language identification, tokenization, spelling correction, part-of-speech tagging, syntax parsing and a final relation extraction step. The extracted information is used as input to data mining methods such as an early warning system and a graph-based visualization for interactive root cause analysis. It is finally investigated how the unstructured data facilitates those quality analysis methods in comparison to structured data. The acceptance of these text-based methods in the company's processes further proves the usefulness of the created information extraction system.
9

A visual approach to web information extraction : Extracting information from e-commerce web pages using object detection

Brokking, Alexander January 2023 (has links)
The vastness of the internet has resulted in an abundance of information that is unorganized and dispersed across numerous web pages. This has been the motivation for automatic web page extraction since the dawn of the internet era. Current strategies primarily apply heuristics and natural language processing methods to the HTML of web pages. However, considering the visual and interactive nature of web pages designed for human use, this thesis explores the potential of computer-vision-based approaches to web page extraction.
In this thesis, state-of-the-art object detection models are trained and evaluated in several experiments on datasets of e-commerce websites to determine their viability. The results indicate that a pre-trained Conditional DETR architecture with a ResNet50 backbone can be fine-tuned to consistently identify target labels in new domains with an mAP_50 above 80%. Visual extraction on new examples within known domain structures showed an even higher mAP_50, above 98%. Finally, this thesis surveys the current literature for datasets that can be used for visual extraction and highlights the importance of domain diversity in the training data. Through this work, initial insights are offered into the application of computer vision to web page extraction, with the hope of inspiring further research in this direction.
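The mAP_50 figures rest on counting predicted boxes that overlap a ground-truth box with intersection over union of at least 0.5. That matching criterion can be sketched as follows (a simplified count that ignores class labels, confidence ranking, and the precision-recall averaging of full mAP; the boxes are invented):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def matches_at_50(predictions, ground_truth):
    """Count predictions that hit some ground-truth box with IoU >= 0.5,
    the matching threshold underlying the mAP_50 figures above."""
    return sum(any(iou(p, g) >= 0.5 for g in ground_truth) for p in predictions)

preds = [(10, 10, 50, 50), (100, 100, 120, 120)]  # second box misses entirely
truth = [(12, 12, 48, 52)]
print(matches_at_50(preds, truth))  # 1
```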
10

Exploring GPT models as biomedical knowledge bases : By evaluating prompt methods for extracting information from language models pre-trained on scientific articles

Hellberg, Ebba January 2023 (has links)
Scientific findings recorded in the literature continuously guide scientific advancements, but manual approaches to accessing that knowledge are insufficient due to the sheer quantity of information and data available. Although pre-trained language models are being explored for their utility as knowledge bases and structured data repositories, there is a lack of research on this application in the biomedical domain. Therefore, the aim of this project was to determine how Generative Pre-trained Transformer models pre-trained on articles in the biomedical domain can be used to make relevant information more accessible. Several models (BioGPT, BioGPT-Large, and BioMedLM) were evaluated on the task of extracting chemical-protein relations between entities directly from the models through prompting. Prompts were formulated as natural language text or as an ordered triple, and provided in different settings (few-shot, one-shot, or zero-shot). Model predictions were evaluated quantitatively as a multiclass classification task using a macro-averaged F1 score. The results showed that, of the explored methods, the best performance for extracting chemical-protein relations from article abstracts was obtained using a triple-based text prompt on the largest model, BioMedLM, in the few-shot setting, albeit with only a small improvement over the baseline (+0.019 F1). There was no clear pattern as to which prompt setting was favourable in terms of task performance; however, the triple-based prompt was generally more robust than the natural language formulation. The two smaller models underperformed the random baseline (by at best -0.026 and -0.001 F1). The impact of the prompt method was minimal in the smallest model, and the one-shot setting was the least sensitive to the prompt formulation in all models. However, there were more pronounced differences between the prompt methods in the few-shot setting of the larger models (+0.021-0.038 F1).
The results suggest that the method of prompting and the size of the model affect the knowledge-eliciting performance of a language model. Admittedly, the models mostly underperformed the baseline, and future work needs to look into how to adapt generative language models to solve this task. Future research could investigate what impact automatic prompt-design methods and larger in-domain models have on model performance.
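The two prompt formulations compared in the project can be sketched as simple templates with optional few-shot demonstrations. The relation and entity names below are invented placeholders, not examples from the evaluated dataset:

```python
def triple_prompt(head, tail, examples=()):
    """Ordered-triple prompt format: demonstrations as (head, relation, tail)
    triples, then a query triple with the relation left blank."""
    demo = "".join(f"({h}, {r}, {t})\n" for h, r, t in examples)
    return f"{demo}({head}, ?, {tail})"

def natural_prompt(head, tail, examples=()):
    """Natural language prompt format for the same relation-extraction query."""
    demo = "".join(f"The relation between {h} and {t} is {r}.\n"
                   for h, r, t in examples)
    return f"{demo}The relation between {head} and {tail} is"

shots = [("aspirin", "inhibitor", "COX-1")]  # a one-shot demonstration
print(triple_prompt("caffeine", "PDE4", examples=shots))
# (aspirin, inhibitor, COX-1)
# (caffeine, ?, PDE4)
```

With an empty `examples` tuple the same templates yield the zero-shot setting, which is how the three settings differ only in the number of demonstrations prepended.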
