161. Discovering Implant Terms in Medical Records. Jerdhaf, Oskar. January 2021.
Implant terms are terms like "pacemaker" that indicate the presence of artificial objects in a patient's body. These implant terms are key to determining whether a patient can safely undergo Magnetic Resonance Imaging (MRI). However, identifying these terms in medical records is time-consuming, laborious, and expensive, yet necessary for taking the correct precautions before an MRI scan. Automating this process is of great interest to radiologists, as it ideally saves time, prevents mistakes, and as a result saves lives. Electronic medical records (EMRs) contain the documented medical history of a patient, including any implants or objects inside their body. Information about such objects and implants is of great interest when determining if and how a patient can be scanned using MRI. Unfortunately, this information is not easily extracted automatically: implant terms appear sparsely, and medical records are structured unlike most written text, which makes extraction difficult to automate by simple means. By leveraging recent advancements in Artificial Intelligence (AI), this thesis explores the ability to identify and extract such terms automatically from Swedish EMRs. For the task of identifying implant terms, a generally trained Swedish Bidirectional Encoder Representations from Transformers (BERT) model is used, fine-tuned on Swedish medical records. Using this model, a variety of approaches are explored, namely BERT-KDTree, BERT-BallTree, cosine brute force, and unsupervised named entity recognition (NER). The results show that BERT-KDTree and BERT-BallTree are the most rewarding methods. Results from both methods have been evaluated by domain experts and appear promising for such an early stage, given the difficulty of the task. The evaluation of BERT-BallTree shows that multiple extraction methods may be preferable, as they provide different but still useful terms. Cosine brute force is deemed unrealistic due to its computational and memory requirements. The NER approach was deemed too impractical and laborious to justify for this study, though it could prove useful, or even more suitable, under a different set of conditions and goals. While much remains to be explored and improved, these experiments clearly indicate that automatic identification of implant terms is possible, as a large number of implant terms were successfully discovered by automated means.
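As a concrete illustration of the BERT-BallTree idea, the sketch below embeds candidate terms with a Swedish BERT model and retrieves the nearest neighbours of a known implant term from a ball-tree index. The checkpoint name (KB/bert-base-swedish-cased), the mean-pooling step, and the toy vocabulary are assumptions for illustration, not the thesis's actual setup.

```python
# Sketch of BERT-BallTree term discovery: embed candidate terms with a
# Swedish BERT model, index them in a ball tree, and treat the nearest
# neighbours of a known implant term ("pacemaker") as new candidates.
# Model name and seed vocabulary are illustrative assumptions.
import torch
from sklearn.neighbors import BallTree
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased")

def embed(terms):
    """Mean-pool the last hidden layer into one vector per term."""
    enc = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state       # (batch, seq, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)     # zero out padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

vocabulary = ["pacemaker", "protes", "stent", "blodtryck", "feber"]  # toy EMR terms
tree = BallTree(embed(vocabulary))                 # index all candidate terms

# Nearest neighbours of a known implant term become implant-term candidates.
dist, idx = tree.query(embed(["pacemaker"]), k=3)
print([vocabulary[i] for i in idx[0]])
```

Swapping `BallTree` for `sklearn.neighbors.KDTree` gives the BERT-KDTree variant; the trade-off between the two is in index build and query cost, not in which neighbours are returned.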
162. Large-Context Question Answering with Cross-Lingual Transfer. Sagen, Markus. January 2021.
Models based on the transformer architecture have become among the most prominent for solving a multitude of natural language processing (NLP) tasks since the architecture's introduction in 2017. However, much research related to the transformer model has focused primarily on achieving high performance, and many problems remain unsolved. Two of the most prominent are the lack of high-performing non-English pre-trained models and the limited number of words most trained models can incorporate as context. Solving these problems would make NLP models more suitable for real-world applications, improving information retrieval, reading comprehension, and more. Previous research on incorporating long context has focused on English language models. This thesis investigates cross-lingual transferability between languages when training for long context in English only. Training long-context models only in English could make long context more accessible for low-resource languages, such as Swedish, since such data is hard to find in most languages and training for each language is costly. This could become an efficient method for creating long-context models in other languages without needing such data in every language or pre-training from scratch. We extend the models' context using the training scheme of the Longformer architecture and fine-tune on a question-answering task in several languages. Our evaluation could neither satisfactorily confirm nor deny whether long-context capability transfers to low-resource languages. We believe that datasets requiring long-context reasoning, such as a multilingual TriviaQA dataset, could demonstrate the validity of our hypothesis.
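A rough sketch, under stated assumptions, of one half of the Longformer training scheme mentioned above: growing a pre-trained multilingual encoder's maximum context by tiling its learned position embeddings. The base checkpoint (xlm-roberta-base) and target length (4096 tokens) are illustrative assumptions; the scheme's other half, swapping in sliding-window self-attention before continued pre-training, is omitted here.

```python
# Extend a pre-trained encoder's position embeddings from 512 to 4096 by
# tiling the learned weights, as in the Longformer conversion recipe.
# Checkpoint and target length are assumptions for illustration only.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("xlm-roberta-base")
old = model.embeddings.position_embeddings.weight.data  # (514, hidden): 2 reserved slots + 512 positions
new_len, hidden = 4096 + 2, old.size(1)

new = old.new_empty(new_len, hidden)
new[:2] = old[:2]                                       # keep the reserved slots
for start in range(2, new_len, old.size(0) - 2):        # tile learned positions
    chunk = min(old.size(0) - 2, new_len - start)
    new[start:start + chunk] = old[2:2 + chunk]

model.embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(
    new, freeze=False, padding_idx=1                    # keep embeddings trainable
)
model.config.max_position_embeddings = new_len
```

The intuition behind tiling rather than random initialization is that fine-tuning then only has to learn the long-range attention pattern, not relearn positional structure from scratch.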
163. Surmize: An Online NLP System for Close-Domain Question-Answering and Summarization. Bergkvist, Alexander; Hedberg, Nils; Rollino, Sebastian; Sagen, Markus. January 2020.
The amount of data available to and consumed by people globally is growing. To reduce mental fatigue and increase the general ability to gain insight into complex texts or documents, we have developed an application to aid in this task. The application allows users to upload documents and ask domain-specific questions about them through our web application. A summarized version of each document is presented to the user, which can further facilitate their understanding of the document and guide them towards what types of questions may be relevant to ask. Our application gives users flexibility in the types of documents that can be processed, is publicly available, stores no user data, and uses state-of-the-art models for its summaries and answers. The result is an application that yields near human-level intuition for answering questions in certain isolated cases, such as Wikipedia and news articles, as well as some scientific texts. The application's reliability and prediction quality decrease as the complexity of the subject, the number of words in the document, and the grammatical inconsistency of the questions increase. These are all aspects that could be improved further for production use.
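A minimal sketch of the question-answering and summarization flow described above, assuming off-the-shelf Hugging Face pipelines stand in for the deployed models; the file name and question are hypothetical.

```python
# Sketch: summarize an uploaded document, then answer domain-specific
# questions with spans extracted from its text. Default pipeline checkpoints
# stand in for the system's actual models; "uploaded.txt" is hypothetical.
from transformers import pipeline

summarizer = pipeline("summarization")        # default DistilBART checkpoint
answerer = pipeline("question-answering")     # default SQuAD-tuned checkpoint

with open("uploaded.txt") as f:               # hypothetical user upload
    document = f.read()

# Present the summary first, to orient the user before they ask questions.
# Note: very long documents need chunking, since the pipeline truncates input.
summary = summarizer(document, max_length=130, min_length=30)
print(summary[0]["summary_text"])

# Answer each question with a span extracted from the document itself.
result = answerer(question="What is the main finding?", context=document)
print(result["answer"], round(result["score"], 3))
```

Because the answer is an extracted span rather than generated text, the confidence score gives a direct handle on the reliability drop noted above: low-scoring answers can be flagged or withheld.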
164. Perspective vol. 24 no. 5 (Nov 1990). Klein, Reinder J.; Seerveld, Calvin; Vander Woude, Ruth; Stadt, Albert; DeJong, Deb. November 1990.
No description available.
165. Perspective vol. 24 no. 5 (Nov 1990) / Perspective (Institute for Christian Studies). Klein, Reinder J.; Seerveld, Calvin; Vander Woude, Ruth; Stadt, Albert; DeJong, Deb. 26 March 2013.
No description available.