231 |
Object Classification using Language Models
From, Gustav January 2022 (has links)
In today's modern digital world, more and more emails and messages must be sent, processed and handled. Categorizing and classifying these pieces of text can take an incredibly long time and costs companies a great deal of time and money. If the classification could be done automatically by a computer, based on the content of the text or message, it would be a major gain for Easit AB and its customers. To facilitate the task of text classification, Easit needs a solution made up of one language model and one classifier model. The language model converts raw text into a vector representative of the text, and the classifier determines which predefined labels fit that vector. The end goal is not to create the best possible solution; it is to build a general understanding of different language and classifier models and of how to design a system that is both fast and accurate. BERT was the primary language model during evaluation, but doc2Vec and one-hot encoding were also tested. The classifiers consisted of boundary-condition models or dense neural networks, all trained without knowledge of which language model the text vectors came from. The validation accuracy reported for the IMDB comment dataset with BERT lay between 75% and 94%, depending mostly on the language model rather than the classifier. The knowledge gained from the work resulted in a recommendation to Easit for an alternative-based system solution. / In today's modern digital world, ever more email cases and messages are to be sent and processed. Categorizing and classifying these can take an incredibly long time and costs companies time and money. If the classification could happen automatically, based on the content of the text, it would mean a major gain for Easit AB and their customers. To ease the work of text classification, Easit needs a two-part solution consisting of a language model and a classifier. The language model converts the text into a vector that represents the text, and the classifier interprets which predefined labels fit the vector. The goal is not to create the best solution, but to build general knowledge of how a system can be designed to classify text accurately and efficiently. When evaluating different language models, BERT models were used above all, but doc2Vec and one-hot were also tested. The classifier consisted of boundary-condition models or dense neural networks trained entirely without knowledge of which language model had sent the text vectors. The accuracy shown at validation on the IMDB comment dataset with BERT was between 75% and 94%, depending primarily on the language model. Neural networks are best suited as classifiers, mainly because of their scalability with multiple labels. The knowledge from the work resulted in a recommendation to Easit for an alternative-based system solution.
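A minimal sketch of the two-part pipeline the abstract describes, text to vector to label, assuming the Hugging Face transformers and torch packages; the model name, mean pooling and layer sizes are illustrative choices, not the thesis's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Language model: raw text -> fixed-size vector (mean-pooled last layer).
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)          # ignore padding
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, 768)

# Classifier: a small dense network trained on the vectors alone,
# with no knowledge of which language model produced them.
classifier = torch.nn.Sequential(
    torch.nn.Linear(768, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2),   # e.g. positive/negative for IMDB reviews
)

logits = classifier(embed(["A surprisingly good film.", "Terrible pacing."]))
print(logits.softmax(-1))
```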
|
232 |
Finding Implicit Citations in Scientific Publications : Improvements to Citation Context Detection Methods
Murray, Jonathan January 2015 (has links)
This thesis deals with the task of identifying implicit citations between scientific publications. Apart from being useful knowledge on their own, the citations may be used as input to other problems, such as determining an author's sentiment towards a reference, or summarizing a paper based on what others have written about it. We extend two recently proposed methods, a machine-learning classifier and an iterative belief-propagation algorithm. Both are implemented and evaluated on a common pre-annotated dataset. Several changes to the algorithms are then presented, incorporating new sentence features and different semantic text-similarity measures, as well as combining the methods into a single classifier. Our main finding is that the introduction of new sentence features yields significantly improved F-scores for both approaches. / This thesis addresses the problem of finding implicit citations between scientific publications. Besides being interesting in their own right, these citations can be used in other problems, such as assessing an author's attitude towards a reference or summarizing a paper based on how it has been cited by others. We build on two recent methods, a machine-learning-based classifier and an iterative algorithm based on a graphical model. These are implemented and evaluated on a common pre-annotated dataset. A number of changes to the algorithms are presented, in the form of new sentence features, different semantic text-similarity measures and a way of combining the two methods. The main result of the work is that the new sentence features lead to notably improved F-scores for both methods.
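The kind of sentence features involved can be sketched as follows, assuming scikit-learn; the specific features and the TF-IDF cosine similarity below stand in for the thesis's feature set and semantic similarity measures, they are not its exact choices.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def sentence_features(sentence, prev_sentence, cited_title, vectorizer):
    """Features for classifying `sentence` as an implicit citation."""
    vecs = vectorizer.transform([sentence, prev_sentence, cited_title])
    return {
        # Semantic similarity to the cited work and to the previous sentence.
        "sim_to_cited": float(cosine_similarity(vecs[0], vecs[2])[0, 0]),
        "sim_to_prev": float(cosine_similarity(vecs[0], vecs[1])[0, 0]),
        # Simple lexical cues often used in citation-context work.
        "has_pronoun": any(w in sentence.lower().split()
                           for w in ("they", "their", "this", "these")),
        "length": len(sentence.split()),
    }

corpus = ["Their method improves F-scores.", "Smith et al. propose a parser.",
          "A fast dependency parser"]
vectorizer = TfidfVectorizer().fit(corpus)
print(sentence_features(corpus[0], corpus[1], corpus[2], vectorizer))
```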
|
233 |
Language Learning Using Models of Intentionality in Repeated Games with Cheap Talk
Skaggs, Jonathan Berry 31 May 2022 (has links)
Language is critical to establishing long-term cooperative relationships among intelligent agents (including people), particularly when the agents' preferences are in conflict. In such scenarios, an agent uses speech to coordinate and negotiate behavior with its partner(s). While recent work has shown that neural language modeling can produce effective speech agents, such algorithms typically only accept previous text as input. However, in relationships among intelligent agents, not all relevant context is expressed in conversation. Thus, in this paper, we propose and analyze an algorithm, called Llumi, that incorporates other forms of context to learn to speak in long-term relationships modeled as repeated games with cheap talk. Llumi combines models of intentionality with neural language modeling techniques to learn speech from data that is relevant to the agent's current context. A user study illustrates that, while imperfect, Llumi does learn context-aware speech in repeated games with cheap talk when partnered with people, including games on which it was not trained. We believe these results are useful in determining how autonomous agents can learn to use speech to facilitate successful human-agent teaming.
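The setting itself, a repeated game with cheap talk, can be illustrated with a toy example; the two hand-written policies below are placeholders for illustration, not the Llumi algorithm.

```python
# Toy repeated prisoner's dilemma with cheap talk: each round both agents
# send a costless message, then choose to cooperate (C) or defect (D).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def talker(partner_message, history):
    # Announces cooperation, then mirrors the partner's previous action.
    action = history[-1][1] if history else "C"
    return "let's cooperate", action

def skeptic(partner_message, history):
    # Cooperates only if the partner's talk sounds cooperative.
    action = "C" if "cooperate" in partner_message else "D"
    return "ok", action

history = []                     # (talker_action, skeptic_action) per round
msg1, msg2 = "", ""
for round_no in range(5):
    msg1, a1 = talker(msg2, history)
    msg2, a2 = skeptic(msg1, [(b, a) for (a, b) in history])
    history.append((a1, a2))
    print(round_no, msg1, "/", msg2, "->", PAYOFFS[(a1, a2)])
```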
|
234 |
Methods for increasing cohesion in automatically extracted summaries of Swedish news articles : Using and extending multilingual sentence transformers in the data-processing stage of training BERT models for extractive text summarization / Metoder för att öka kohesionen i automatiskt extraherade sammanfattningar av svenska nyhetsartiklar
Andersson, Elsa January 2022 (has links)
Developments in deep learning and machine learning overall have created a plethora of opportunities for more easily training automatic text summarization (ATS) models that produce summaries of higher quality. ATS can be split into extractive and abstractive tasks: extractive models select sentences from the original text to create summaries, whereas abstractive models generate novel sentences. While extractive summaries are often preferred over abstractive ones, summaries created by extractive models trained on Swedish texts often lack cohesion, which affects the readability and overall quality of the summary. There is therefore a need to improve the process of training ATS models in terms of cohesion, while maintaining other text qualities such as content coverage. This thesis explores and implements methods at the data-processing stage aimed at improving the cohesion of generated summaries. The methods are based on Sentence-BERT, which creates advanced sentence embeddings that can be used to rank the sentences of a text by whether they should be included in the extractive summary. Three models are trained using different methods and evaluated using ROUGE and BERTScore, which measure content coverage, and Coh-Metrix, which measures cohesion. The results of the evaluation suggest that the methods can indeed be used to create more cohesive summaries, although content coverage was reduced, which leaves ample room for future work on further implementations.
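A minimal sketch of the Sentence-BERT ranking idea, assuming the sentence-transformers package; the model name and the centroid-similarity scoring are illustrative, not the thesis's exact method.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def rank_sentences(sentences, top_k=3):
    embeddings = model.encode(sentences)               # (n, dim)
    centroid = embeddings.mean(axis=0)
    # Cosine similarity of every sentence to the document centroid.
    sims = embeddings @ centroid / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(centroid))
    order = np.argsort(-sims)[:top_k]
    return sorted(order)   # keep original sentence order to preserve cohesion

sentences = ["Regeringen presenterade budgeten idag.",
             "Oppositionen kritiserade förslaget.",
             "Vädret var soligt i Stockholm."]
print(rank_sentences(sentences, top_k=2))
```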
|
235 |
Towards terminology-based keyword extraction
Krassow, Cornelius January 2022 (has links)
The digitization of information has produced an overflow of data in many areas of society, including the clinical sector. However, confidentiality issues concerning the privacy of both clinicians and patients have hampered research into how best to deal with this kind of "clinical" data. One example of clinical data that can be found in abundance is the Electronic Medical Record, or EMR for short. EMRs contain information about a patient's medical history, such as summaries of earlier visits, prescribed medications and more. EMRs can be quite extensive, and reading them in full can be time-consuming, especially considering the often hectic nature of hospital work. Giving clinicians the ability to see which information is most important in an extensive EMR could therefore be very useful. Keyword extraction comprises methods developed in the field of language technology that aim to automatically extract the most important terms or phrases from a text. Successfully applying these methods to EMR data could lend clinicians a helping hand when time is short. Clinical data are highly domain-specific, however, requiring different kinds of expert knowledge depending on which field of medicine is being investigated. Due to the scarcity of research not only on clinical keyword extraction but on clinical data as a whole, foundational groundwork needs to be laid on how best to meet the domain-specific demands of a clinical keyword extractor. By exploring how the two unsupervised approaches YAKE! and KeyBERT handle the domain-specific task of implant-focused keyword extraction, the limits of clinical keyword extraction are tested. Furthermore, the performance of a general BERT model is compared with that of a model finetuned on domain-specific data. Finally, an attempt is made to create a domain-specific set of gold-standard keywords by combining unsupervised approaches to keyword extraction. The results show that unsupervised approaches perform poorly on domain-specific tasks that lack a clear correlation to the main domain of the data. Finetuned BERT models seem to perform almost as well as a general model when tasked with implant-focused keyword extraction, although further research is needed. Finally, the use of unsupervised approaches in conjunction with manual evaluation by domain experts shows some promise.
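A hedged sketch of how the two unsupervised extractors are typically invoked, assuming the yake and keybert packages; the example text and parameters are illustrative, not the thesis's clinical data.

```python
import yake
from keybert import KeyBERT

text = ("The patient received a titanium dental implant. "
        "Osseointegration was monitored over twelve weeks.")

# YAKE!: statistical keyword extraction, no language model required.
yake_extractor = yake.KeywordExtractor(lan="en", n=2, top=5)
print(yake_extractor.extract_keywords(text))   # [(phrase, score), ...]

# KeyBERT: ranks candidate phrases by embedding similarity to the document.
kw_model = KeyBERT()                           # default MiniLM sentence model
print(kw_model.extract_keywords(text, keyphrase_ngram_range=(1, 2), top_n=5))
```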
|
236 |
A Security Related and Evidence-Based Holistic Ranking and Composition Framework for Distributed Services
Chowdhury, Nahida Sultana 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The number of smart mobile devices has grown at a significant rate in recent years. This growth has resulted in an exponential number of publicly available mobile Apps. To help users select suitable Apps from the choices on offer, App distribution platforms generally rank and recommend Apps based on average star ratings, the number of installs, and associated reviews ― all external factors of an App. However, these ranking schemes typically ignore critical internal factors (e.g., bugs, security vulnerabilities, and data leaks) of the Apps. AppStores need to incorporate a holistic methodology that includes both internal and external factors to assign a level of trust to Apps; the internal factors describe the associated potential security risks. This issue is even more crucial with newly available Apps, for which user reviews are sparse or the number of installs is still insignificant. In such a scenario, users may fail to estimate the potential risks associated with installing Apps from an AppStore.
This dissertation proposes a security-related and evidence-based ranking framework, called SERS (Security-related and Evidence-based Ranking Scheme), to compare similar Apps. The trust associated with an App is calculated from both internal and external factors (i.e., security flaws and user reviews) following an evidence-based approach and applying subjective-logic principles. SERS is formalized and further enhanced in the second part of this dissertation, resulting in an enhanced version called E-SERS (Enhanced SERS). The enhancements include the ability to integrate any number of sources that can generate evidence for an App, and consideration of the temporal aspect and reputation of the evidence sources. Both SERS and E-SERS are evaluated using publicly accessible Apps from the Google PlayStore, and the rankings they generate are compared with prevalent ranking techniques such as average star ratings and the Google PlayStore rankings. The experimental results indicate that E-SERS provides a comprehensive and holistic view of an App when compared with prevalent alternatives. E-SERS is also successful in identifying malicious Apps where other ranking schemes fail to flag such vulnerabilities.
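The subjective-logic machinery that such an evidence-based scheme builds on can be sketched with Jøsang's binomial opinions; the mapping below is standard subjective logic, not the exact E-SERS formulation, and the evidence counts are invented for illustration.

```python
def opinion(r, s, W=2.0, a=0.5):
    """Map r positive and s negative evidence items to a (belief, disbelief,
    uncertainty) opinion and an expected trust probability E = b + a*u."""
    b = r / (r + s + W)          # belief grows with positive evidence
    d = s / (r + s + W)          # disbelief grows with negative evidence
    u = W / (r + s + W)          # uncertainty shrinks as evidence accumulates
    return b, d, u, b + a * u

# External evidence: 40 positive vs. 10 negative user reviews.
print(opinion(40, 10))
# Internal evidence: 1 clean scan vs. 3 detected security flaws --
# few data points, so uncertainty stays high.
print(opinion(1, 3))
```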
In the third part of this dissertation, the E-SERS framework is used to propose a trust-aware composition model at two different granularities. This model uses the trust score computed by E-SERS, along with the probability of an App belonging to the malicious category, as the attributes for selecting a composition at the two granularities. Finally, the trust-aware composition model is evaluated against both the average star rating and the trust score.
A holistic approach, as proposed by E-SERS, to computing a trust score will benefit all kinds of Apps, including newly published Apps that follow proper security measures but initially struggle in AppStore rankings for lack of a large number of reviews and ratings. Hence, E-SERS will be helpful to both developers and users. In addition, a composition model that uses such a holistic trust score will enable system integrators to create trust-aware distributed systems for their specific needs.
|
237 |
Conversational Engine for Transportation Systems
Sidås, Albin, Sandberg, Simon January 2021 (has links)
Today's communication between operators and professional drivers takes place through direct conversations between the parties. This thesis project explores the possibility of supporting operators in classifying the topic of incoming communications, and which entities are affected, through the use of named entity recognition and topic classification. Using a synthetic training dataset, a NER model and a topic classification model were developed and evaluated, achieving F1-scores of 71.4 and 61.8 respectively. These results are explained by the low variance of the synthetic dataset compared to a real-world transcribed dataset, which included anomalies not represented in the synthetic data. The models were integrated into the dialogue framework Emora to handle the back-and-forth communication and generate responses.
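A hedged sketch of the two steps, named entity recognition and topic classification, using off-the-shelf Hugging Face pipelines as stand-ins; the thesis trained its own models on synthetic data, and the models, utterance and labels here are illustrative.

```python
from transformers import pipeline

# Default English models are downloaded on first use; the thesis's own
# models would replace these.
ner = pipeline("ner", aggregation_strategy="simple")
topic = pipeline("zero-shot-classification")

utterance = "Truck 42 is delayed at the Norrköping terminal."
print(ner(utterance))   # entities such as locations and identifiers
print(topic(utterance, candidate_labels=["delay", "accident", "route change"]))
```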
|
238 |
Building high-quality datasets for abstractive text summarization : A filtering‐based method applied on Swedish news articles
Monsen, Julius January 2021 (has links)
With an increasing amount of information on the internet, automatic text summarization could potentially make content more readily available to a larger variety of people. Training and evaluating text summarization models require datasets of sufficient size and quality. Today, most such datasets are in English, and for smaller languages such as Swedish it is not easy to obtain corresponding datasets with handwritten summaries. This thesis proposes methods for compiling high-quality datasets suitable for abstractive summarization from a large amount of noisy data, through characterization and filtering. The data used consist of Swedish news articles and their preambles, which are used here as summaries. Different filtering techniques are applied, yielding five different datasets. Furthermore, summarization models are implemented by warm-starting an encoder-decoder model with BERT checkpoints and fine-tuning it on the different datasets. The fine-tuned models are evaluated with ROUGE metrics and BERTScore. All models achieve significantly better results when evaluated on filtered test data than on unfiltered test data. Moreover, the model trained on the most heavily filtered, and smallest, dataset achieves the best results on the filtered test data. The trade-off between dataset size and quality, and other methodological implications of the data characterization, the filtering and the model implementation, are discussed, leading to suggestions for future research.
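A minimal sketch of the warm-starting step, assuming Hugging Face transformers; the Swedish checkpoint name is an assumption, not necessarily the one used in the thesis.

```python
from transformers import EncoderDecoderModel, AutoTokenizer

checkpoint = "KB/bert-base-swedish-cased"    # assumed Swedish BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Both encoder and decoder start from the same BERT weights; cross-attention
# layers are added to the decoder, and the model is then fine-tuned on
# (article, preamble) pairs from the filtered datasets.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint,
                                                            checkpoint)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```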
|
239 |
Extractive Text Summarization of Norwegian News Articles Using BERT
Biniam, Thomas Indrias, Morén, Adam January 2021 (has links)
Extractive text summarization has over the years been an important research area in Natural Language Processing. Numerous methods have been proposed for extracting information from text documents. Recent work has shown great success on English summarization tasks by fine-tuning the language model BERT using large summarization datasets. However, less research has been done on low-resource languages. This work contributes by investigating how BERT can be used for Norwegian text summarization. Two models are developed by applying a modified BERT architecture, called BERTSum, to pre-trained Norwegian and multilingual BERT. The result is models able to predict key sentences from articles and generate bullet-point summaries. These models are evaluated with the automatic metric ROUGE, and in this evaluation the multilingual BERT model outperforms the Norwegian model. The multilingual model is further assessed in a human evaluation by journalists, revealing that the generated summaries are not entirely satisfactory in some respects. With some improvements, however, the model shows promise as a tool for journalists to edit and rewrite generated summaries, saving time and workload. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
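The BERTSum-style inference step can be sketched as follows; score_sentences is a placeholder for the fine-tuned model, not the actual BERTSum scoring head.

```python
def extract_summary(sentences, score_sentences, n_bullets=3):
    """Pick the highest-scoring sentences as a bullet-point summary."""
    scores = score_sentences(sentences)          # one score per sentence
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    chosen = sorted(ranked[:n_bullets])          # restore document order
    return ["- " + sentences[i] for i in chosen]

# Toy scorer: favours longer sentences (the real model scores sentences
# from their BERT encodings).
toy_scorer = lambda sents: [len(s.split()) for s in sents]
article = ["Regjeringen la frem budsjettet.", "Det var sol i Oslo.",
           "Opposisjonen varsler omkamp om skattene i Stortinget."]
print("\n".join(extract_summary(article, toy_scorer, n_bullets=2)))
```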
|
240 |
Machine Learning Evaluation of Natural Language to Computational Thinking : On the possibilities of coding without syntax
Björkman, Desireé January 2020 (has links)
Voice commands are used in today's society to offer services such as putting events into a calendar, telling you about the weather and controlling the lights at home. This project tries to extend the possibilities of voice commands by improving an earlier proof-of-concept system that interprets intentions given in natural language into program code. The improvement was made by mixing linguistic methods and neural networks to increase the accuracy and flexibility of the interpretation of input. A user-testing phase was conducted to determine whether the improvement would attract users to the interface. The results showed possibilities for educational use in computational thinking, as well as the issues that must be overcome for the system to become a general programming tool.
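The core idea, mapping a natural-language intention to program code, can be illustrated with a toy keyword matcher; this stands in for the thesis's mix of linguistic methods and neural networks, and the templates are invented for illustration.

```python
# Map an utterance to a code template by matching intent-bearing keywords.
TEMPLATES = {
    "loop": "for i in range({n}):\n    print(i)",
    "save": "with open({path!r}, 'w') as f:\n    f.write(data)",
}

def interpret(utterance):
    words = utterance.lower()
    if "repeat" in words or "times" in words:
        # Pull a number out of the utterance, defaulting to 10.
        n = next((w for w in words.split() if w.isdigit()), "10")
        return TEMPLATES["loop"].format(n=n)
    if "save" in words or "file" in words:
        return TEMPLATES["save"].format(path="out.txt")
    return "# could not interpret the intention"

print(interpret("repeat printing 5 times"))
```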
|