121

NLP-Assisted Workflow Improving Bug Ticket Handling

Eriksson, Caroline, Kallis, Emilia January 2021 (has links)
Software companies spend a lot of resources on debugging, a process where previous solutions can help in solving current problems. The bug tickets containing this information are often time-consuming to read. To minimize the time spent on debugging and to make sure that the knowledge from prior solutions is kept within the company, an evaluation was made to see whether summaries could make this process more efficient. Abstractive and extractive summarization models were tested for this task, and the bert-extractive-summarizer was fine-tuned. The model-generated summaries were compared in terms of perceived quality, speed, similarity to each other, and summary length. The average description summary contained part of the needed description, and the solution it found was either well documented or did not answer the problem at all. The fine-tuned extractive model and the abstractive model BART provided good conditions for generating summaries containing all the information needed. / In software development, a lot of resources go into debugging, a process where previous solutions can help solve current problems. The bug reports containing this information are often time-consuming to read. To minimize the time spent on debugging and to ensure that knowledge from previous solutions is retained within the company, it was evaluated whether summaries could make this process more efficient. Abstractive and extractive summarization models were tested for the task, and the bert-extractive-summarizer was fine-tuned. The generated summaries were compared with respect to perceived quality, generation speed, similarity to each other, and summary length. The average summary contained parts of the most important information, and the proposed solution was either well documented or did not answer the problem description at all. The fine-tuned BERT and the abstractive model BART showed good potential for generating summaries containing all of the most important information.
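The thesis compares two summarization families: extractive models that select existing sentences and abstractive models that generate new ones. A minimal sketch of both follows, assuming the public bert-extractive-summarizer package and a generic BART checkpoint (facebook/bart-large-cnn); the bug-ticket text and parameters are illustrative, not the thesis's data or models:

```python
# Extractive vs. abstractive summarization of a made-up bug ticket.
from summarizer import Summarizer   # pip install bert-extractive-summarizer
from transformers import pipeline   # pip install transformers

ticket = (
    "Users report that the nightly export job crashes when a report exceeds "
    "10,000 rows. The stack trace points to a null pointer in CsvWriter. "
    "The issue was fixed by guarding against empty header rows in release 4.2.1."
)

# Extractive: picks the most salient existing sentences (BERT backbone).
extractive = Summarizer()
print(extractive(ticket, num_sentences=2))

# Abstractive: generates a new, shorter formulation (BART).
abstractive = pipeline("summarization", model="facebook/bart-large-cnn")
print(abstractive(ticket, max_length=40, min_length=10)[0]["summary_text"])
```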
122

African-American Men and a Journey Through Musical Theatre and Opera

McCloud, Shonn 01 May 2014 (has links)
The purpose of this study is to outline the origins of African-American men in musical theatre, uncover their contributions to the art form, and explore how their legacy is continued today. I was inspired to do this research because my undergraduate curriculum only narrowly covered African-American men in musical theatre and opera history. Upon realizing the lack of attention to this subject, not only in my curriculum but in historical resources, I was moved to address the need for this research. The courses I have taken include Theatre History 1 and 2 and Musical Theatre History 1 and 2; recognition of African-Americans in the theatrical arts was discussed at only a minimal level. The majority of African-American studies in these classes focused on minstrelsy and its contribution to American musical theatre. Minstrelsy was an American form of entertainment consisting of variety acts, dancing, and music that flourished in the 1800s and persisted into the early 1900s. The shows were a mockery of African-Americans, with white (and sometimes Black) men dressing themselves in clown-like costumes and black face paint to depict a caricature of Black people. Throughout my coursework I have found there is still a presence of minstrelsy in the framework of American musical theatre today. Understanding how minstrelsy influenced musical theatre led me to research Bert Williams, a pioneering African-American performer in both minstrelsy and American theatre. Bert Williams broke racial barriers, allowing African-Americans to perform alongside whites and gain proper show billing. This influenced not only theatre but the social temperature of the time as well: the stereotype of African-Americans in society slowly began to break down, and whites' opportunity to see African-Americans as normal people aided in the seeding and progression of the civil rights movement. To further study the works and life of Bert Williams, I learned and performed his iconic song, "Nobody." The song is a commentary on how Williams is overlooked because he is an African-American man, and on how he is expected to be funny and make a mockery of himself at his own expense. In researching the historical context and gaining an understanding of the song's content, I was able to better understand other roles I have played in various musicals. This gave me a different perspective on the subject matter of racism within a show. Furthermore, it allowed me to view the evolution of African-American roles in musical theatre and how they originated in vaudevillian shows, a subject I had never explored in my classes. Williams had a very successful and influential career and became the basis for my research. However, as I began my exploration, I realized there was a vast array of men of color who contributed as much, if not more, to the progression of African-American men in musical theatre and opera. Bert Williams, Todd Duncan, and Paul Robeson all forged careers in musical theatre and/or opera. These men aided in presenting African-American men in realistic settings and not as stereotyped caricatures. African-American men in musical theatre and opera are typically overlooked for their contributions to the art forms. However, Bert Williams, Todd Duncan, and Paul Robeson were trailblazers for African-American men in musical theatre and opera, using their status and fame to make political change and fight for equal rights, both on and off stage.
Their legacy is seen in the art form through the structure of musical theatre, the content of the musical comedy that led to the musical drama, and the integration of the African-American performer in both musical theatre and opera. In continuation of their legacy, we see more roles in shows for African-American men and a growing interest in shows featuring African-Americans. The recent openings and revivals of shows like Porgy and Bess, Motown: The Musical, and Kinky Boots all feature leading African-American men on stage. My duty as a young African-American practitioner of both musical theatre and opera is to continue their legacy through both my studies and my performance. I am honored to be a part of their legacy, furthering their contributions and bringing light to their stories through my research and analysis.
123

Extracting Known Side Effects from Summaries of Product Characteristics (SmPCs) Provided in PDF Format by the European Medicines Agency (EMA) using BERT and Python

Buakhao, Rinyarat January 2024 (has links)
Medicines and vaccines have revolutionized disease prevention and treatment, offering numerous benefits. However, they also raise concerns about Adverse Drug Reactions (ADRs), which can have severe consequences. Summaries of Product Characteristics (SmPCs), provided by the European Medicines Agency (EMA), and Structured Product Labelings (SPLs), provided by the Food and Drug Administration (FDA), are valuable sources of information on drug-ADR relations. Understanding these relations is crucial, as it contributes to establishing labeled datasets of known ADRs and advancing statistical assessment methods. Uppsala Monitoring Centre (UMC) has developed a text mining pipeline to extract known ADRs from SPLs. While the pipeline works effectively with SPLs, it faces challenges with SmPCs provided in PDF format. This study explores extending the scanner component of the pipeline by using Python PDF extraction libraries to extract text from SmPCs and by fine-tuning domain-specific pre-trained BERT-based models for Named Entity Recognition (NER), a Natural Language Processing (NLP) task, with the aim of identifying known ADRs in SmPCs. The investigation finds pypdfium2 [1] to be the optimal Python PDF extraction library, and a fine-tuned PubMedBERT—a domain-specific language model pre-trained from scratch [2]—achieves the best performance on the NER task of identifying ADRs in SmPCs. The model's performance, evaluated using entity-level metrics (Exact, Covering, and Overlap match), reaches F1-scores of 0.9138, 0.9268, and 0.9671, respectively, indicating strong performance. Consequently, the extension investigated in this study will be integrated into the existing pipeline by UMC professionals.
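The two pipeline stages described above can be sketched briefly: page-level text extraction with pypdfium2, followed by token-classification NER over the extracted text. This is a minimal sketch assuming a locally available SmPC PDF; the NER checkpoint name is hypothetical, since the thesis's fine-tuned PubMedBERT model is not named here:

```python
# Stage 1: extract raw text from an SmPC PDF with pypdfium2.
import pypdfium2 as pdfium          # pip install pypdfium2
from transformers import pipeline   # pip install transformers

pdf = pdfium.PdfDocument("smpc.pdf")  # path to an EMA SmPC (assumed local file)
text = "\n".join(page.get_textpage().get_text_range() for page in pdf)

# Stage 2: run a BERT-based NER model over the text to tag ADR mentions.
ner = pipeline(
    "token-classification",
    model="example-org/pubmedbert-adr-ner",  # hypothetical fine-tuned checkpoint
    aggregation_strategy="simple",           # merge word pieces into entities
)
for entity in ner(text[:2000]):              # truncate to fit the model's context
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```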
124

ENHANCING ELECTRONIC HEALTH RECORDS SYSTEMS AND DIAGNOSTIC DECISION SUPPORT SYSTEMS WITH LARGE LANGUAGE MODELS

Furqan Ali Khan (19203916) 26 July 2024 (has links)
<p dir="ltr">Within Electronic Health Record (EHR) Systems, physicians face extensive documentation, leading to alarming mental burnout. The disproportionate focus on data entry over direct patient care underscores a critical concern. Integration of Natural Language Processing (NLP) powered EHR systems offers relief by reducing time and effort in record maintenance.</p><p dir="ltr">Our research introduces the Automated Electronic Health Record System, which not only transcribes dialogues but also employs advanced clinical text classification. With an accuracy exceeding 98.97%, it saves over 90% of time compared to manual entry, as validated on MIMIC III and MIMIC IV datasets.</p><p dir="ltr">In addition to our system's advancements, we explore integration of Diagnostic Decision Support System (DDSS) leveraging Large Language Models (LLMs) and transformers, aiming to refine healthcare documentation and improve clinical decision-making. We explore the advantages, like enhanced accuracy and contextual understanding, as well as the challenges, including computational demands and biases, of using various LLMs.</p>
125

Funnel Vision

Grainger, David 01 January 2008 (has links)
This paper discusses the videos and sculptural installation in my thesis exhibition. Shooting videos outside of the studio developed into a project overarching any individual video or its particular signs. Thus, this paper focuses on the video project, with examples that follow a timeline of development, rather than on the six videos actually on display in the exhibit. The two-part sculpture "Deer in the Headlights" was created in the context of these videos and coexists with them in a specific architectural space. This space, as well as the clichéd meaning of the deer's gaze, has a relation to the title of the show.
126

Context matters: Classifying Swedish texts using BERT's deep bidirectional word embeddings

Holmer, Daniel January 2020 (has links)
When classifying texts using a linear classifier, the texts are commonly represented as feature vectors. Previous methods of representing features as vectors have been unable to capture the context of individual words in the texts, in theory leading to a poor representation of natural language. Bidirectional Encoder Representations from Transformers (BERT) uses a multi-headed self-attention mechanism to create deep bidirectional feature representations able to model the whole context of all words in a sequence. A BERT model uses a transfer learning approach: it is pre-trained on a large amount of data and can be further fine-tuned for several downstream tasks. This thesis uses one multilingual and two dedicated Swedish BERT models for the task of classifying Swedish texts as being of either easy-to-read or standard complexity in their respective domains. The performance of the different models on the text classification task is then compared both with the feature representation methods used in earlier studies and across the BERT models themselves. The results show that all models performed better on the classification task than the previous methods of feature representation. Furthermore, the dedicated Swedish models show better performance than the multilingual model, with the Swedish model pre-trained on more diverse data outperforming the other.
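As a sketch of the setup, the task reduces to fine-tuning a pre-trained Swedish BERT with a binary sequence-classification head. KB/bert-base-swedish-cased is one publicly available dedicated Swedish model; whether it matches the thesis's exact checkpoints is an assumption, and the example sentence and label scheme are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "KB/bert-base-swedish-cased"   # a public dedicated Swedish BERT
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2                # 0 = standard, 1 = easy-to-read
)

# "The sun is a star at the centre of our solar system."
batch = tok(
    ["Solen är en stjärna i mitten av vårt solsystem."],
    return_tensors="pt", padding=True, truncation=True,
)
with torch.no_grad():
    logits = model(**batch).logits    # head is untrained: fine-tune before use
print(logits.softmax(dim=-1))
```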
127

Extrakce vztahů mezi entitami / Entity Relationship Extraction

Šimečková, Zuzana January 2020 (has links)
Relationship extraction is the task of extracting semantic relationships between entities from a text. We create a Czech Relationship Extraction Dataset (CERED) using distant supervision on Wikidata and the Czech Wikipedia. We detail the methodology we used and the pitfalls we encountered. We then use CERED to fine-tune a neural network model for relationship extraction. We base our model on BERT, a language model pre-trained on extensive unlabeled data. We demonstrate that our model performs well on existing English relationship datasets (SemEval 2010 Task 8, TACRED) and report the results we achieved on CERED.
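One common recipe for BERT-based relationship extraction, sketched below, is to wrap the two entities in marker tokens and classify the marked sentence; the marker tokens, the multilingual stand-in checkpoint, and the label count are illustrative assumptions, not CERED specifics:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"  # stands in for a Czech-capable BERT
tok = AutoTokenizer.from_pretrained(name)
tok.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)

model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=42)
model.resize_token_embeddings(len(tok))  # make room for the new marker tokens

# Mark both entities so the classifier knows which pair the relation concerns.
# "Karel Čapek wrote the play R.U.R."
sent = "[E1] Karel Čapek [/E1] napsal drama [E2] R.U.R. [/E2]"
logits = model(**tok(sent, return_tensors="pt")).logits  # one score per relation
```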
128

Multilingual identification of offensive content in social media

Pàmies Massip, Marc January 2020 (has links)
In today's society, a large number of social media users are free to express their opinions on shared platforms. The socio-cultural differences between the people behind those accounts (in terms of ethnicity, gender, sexual orientation, religion, politics, etc.) give rise to a significant share of online discussions that use offensive language, which often negatively affects the psychological well-being of the victims. To address the problem, the endless stream of user-generated content creates a need for an accurate and scalable way to detect offensive language using automated methods. This thesis explores different approaches to the offensiveness detection task, focusing on five languages: Arabic, Danish, English, Greek and Turkish. The results obtained using Support Vector Machines (SVM), Convolutional Neural Networks (CNN) and Bidirectional Encoder Representations from Transformers (BERT) are compared, achieving state-of-the-art results with some of the methods tested. The effects of the embeddings used, the dataset size, the class imbalance percentage and the addition of sentiment features are studied and analysed, as well as the cross-lingual capabilities of pre-trained multilingual models.
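One of the factors analysed, class imbalance, can be sketched as a weighted loss on top of a multilingual BERT encoder; the checkpoint, the 80/20 imbalance figure, and the toy batch are assumptions for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2                # 0 = not offensive, 1 = offensive
)

# If only ~20% of the training posts are offensive, upweight that class.
loss_fn = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 4.0]))

batch = tok(["have a nice day", "an offensive post"],
            return_tensors="pt", padding=True)
labels = torch.tensor([0, 1])
loss = loss_fn(model(**batch).logits, labels)
loss.backward()                       # one illustrative training step
```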
129

Annotating Introductions in the Swedish Parliament Using Machine Learning

Mortensen Blomquist, Jesper January 2022 (has links)
No description available.
130

Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models

Holmström, Oskar January 2021 (has links)
The availability and use of knowledge graphs have become commonplace as a compact storage of information and for lookup of facts. However, the discrete representation makes the knowledge graph unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. The need for a continuous representation has spurred the development of knowledge graph embeddings. The idea is to position the entities of the graph relative to each other in a continuous low-dimensional vector space, so that their relationships are preserved, ideally leading to clusters of entities with similar characteristics. Several methods to produce knowledge graph embeddings have been created, from simple models that minimize the distance between related entities to complex neural models. Almost all of these embedding methods attempt to create an accurate static representation of each entity and relation. However, as with words in natural language, both entities and relations in a knowledge graph hold different meanings in different local contexts. With the recent development of Transformer models, and their success in creating contextual representations of natural language, work has been done to apply them to graphs. Initial results show great promise, but there are significant differences in architecture design across papers. There is no clear direction on how Transformer models can best be applied to create contextual knowledge graph embeddings. Two of the main differences in previous work are how the attention mask is applied in the model and which input graph structures the model is trained on. This report explores how different attention masking methods and graph inputs affect a Transformer model (in this report, BERT) on a link prediction task for triples. Models are trained with five different attention masking methods, which restrict attention to varying degrees, and on three different input graph structures (triples, paths, and interconnected triples). The results indicate that a Transformer model trained with a masked language model objective performs strongest on the link prediction task when there are no restrictions on how attention is directed and when it is trained on graph structures that are sequential. This is similar to how models like BERT learn sentence structure after being exposed to a large number of training samples. For more complex graph structures, it is beneficial to encode information about the graph structure through how the attention mask is applied. There are also some indications that the input graph structure affects the model's ability to learn underlying characteristics of the knowledge graph it is trained on.
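As a sketch of the mechanism the report varies, Hugging Face BERT accepts a 3-D attention mask of shape (batch, from_seq, to_seq), which makes token-to-token restrictions straightforward to express; the toy triple and the blocked position below are assumptions for illustration, not the thesis's masking schemes:

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tok("Stockholm capital of Sweden", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]

# Unrestricted mask: every token may attend to every token.
full = torch.ones(1, seq_len, seq_len, dtype=torch.long)

# Restricted mask: arbitrarily block one token-to-token direction to show
# how a masking scheme is encoded (row = attending token, column = target).
restricted = full.clone()
restricted[0, -2, 1] = 0              # last word token may not see token 1

out_full = model(input_ids=enc["input_ids"], attention_mask=full)
out_masked = model(input_ids=enc["input_ids"], attention_mask=restricted)
```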
