
African-American Men and a Journey Through Musical Theatre and Opera

McCloud, Shonn 01 May 2014
The purpose of this study is to outline the origins of African-American men in musical theatre, uncover their contributions to the art form, and explore how their legacy continues today. I was inspired to do this research because my undergraduate curriculum only narrowly covered African-American men in musical theatre and opera history. Upon realizing the lack of attention to this subject, not only in my curriculum but in historical resources, I was moved to address the need for this research. The courses I have taken include Theatre History 1 and 2 and Musical Theatre History 1 and 2, in which the contributions of African-Americans to the theatrical arts were discussed only minimally. The majority of African-American studies in these classes focus on minstrelsy and its contribution to American musical theatre. Minstrelsy was an American form of entertainment consisting of variety acts, dancing, and music that flourished in the nineteenth and early twentieth centuries. The shows were a mockery of African-Americans, with white (and sometimes Black) men dressing in clown-like costumes and blackface to depict caricatures of Black people. Throughout my coursework I have found that a trace of minstrelsy persists in the framework of American musical theatre today.

Understanding how minstrelsy influenced musical theatre led me to research Bert Williams, a pioneering African-American performer in both minstrelsy and American theatre. Bert Williams broke racial barriers, allowing African-Americans to perform alongside whites and gain proper show billing. This influenced not only theatre but the social temperature of the time: stereotypes of African-Americans slowly began to break down, and whites' opportunity to see African-Americans as ordinary people helped seed and advance the civil rights movement. To further study the works and life of Bert Williams, I learned and performed his iconic song, "Nobody." The song is a commentary on how Williams is overlooked because he is an African-American man, and on how he is expected to be funny and make a mockery of himself. Researching the historical context and content of the song allowed me to better understand other roles I have played in various musicals and gave me a different perspective on the subject of racism within a show. It also allowed me to trace the evolution of African-American roles in musical theatre back to their origins in vaudevillian shows, a subject I had never explored in my classes.

Williams had a very successful and influential career and became the basis for my research. As I began my exploration, however, I realized there was a wide range of men of color who contributed as much, if not more, to the progression of African-American men in musical theatre and opera. Bert Williams, Todd Duncan, and Paul Robeson all forged careers in musical theatre and/or opera, presenting African-American men in realistic settings rather than as stereotyped caricatures. African-American men in musical theatre and opera are typically overlooked for their contributions to the art forms. Yet Williams, Duncan, and Robeson were trailblazers, using their status and fame to make political change and fight for equal rights, both on and off stage.

Their legacy is seen in the structure of musical theatre, in the content of the musical comedy that led to the musical drama, and in the integration of the African-American performer into both musical theatre and opera. In continuation of that legacy, we see more roles in shows for African-American men and a growing interest in shows with African-Americans: recent openings and revivals such as Porgy and Bess, Motown: The Musical, and Kinky Boots all feature leading African-American men on stage. My duty as a young African-American practitioner of both musical theatre and opera is to continue their legacy through my studies and performance. I am honored to be a part of that legacy, furthering their contributions and bringing light to their stories through my research and analysis.

Extracting Known Side Effects from Summaries of Product Characteristics (SmPCs) Provided in PDF Format by the European Medicines Agency (EMA) using BERT and Python

Buakhao, Rinyarat January 2024
Medicines and vaccines have revolutionized disease prevention and treatment, offering numerous benefits. However, they also raise concerns about Adverse Drug Reactions (ADRs), which can have severe consequences. Summaries of Product Characteristics (SmPCs), provided by the European Medicines Agency (EMA), and Structured Product Labelings (SPLs), provided by the Food and Drug Administration (FDA), are valuable sources of information on drug-ADR relations. Understanding these relations is crucial, as it contributes to establishing labeled datasets of known ADRs and to advancing statistical assessment methods. Uppsala Monitoring Centre (UMC) has developed a text mining pipeline to extract known ADRs from SPLs. While the pipeline works effectively with SPLs, it faces challenges with SmPCs provided in PDF format. This study explores extending the scanner component of the pipeline by using Python PDF extraction libraries to extract text from SmPCs and by fine-tuning domain-specific pre-trained BERT-based models for Named Entity Recognition (NER), a Natural Language Processing (NLP) task, with the aim of identifying known ADRs in SmPCs. The investigation finds pypdfium2 [1] to be the best-performing Python PDF extraction library, and a fine-tuned PubMedBERT (a domain-specific language model pre-trained from scratch [2]) to achieve the best NER performance in identifying ADRs from SmPCs. The model's performance, evaluated with entity-level metrics (Exact, Covering, and Overlap match), reaches F1-scores of 0.9138, 0.9268, and 0.9671, respectively, indicating strong performance. Consequently, the extension investigated in this study will be integrated into the existing pipeline by UMC professionals.
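A minimal sketch of the two stages described above, under assumed names: pypdfium2 for PDF text extraction, and a Hugging Face token-classification pipeline standing in for the fine-tuned PubMedBERT NER model. The checkpoint path "umc/pubmedbert-adr-ner" is a hypothetical placeholder, not UMC's actual model.

```python
# Hedged sketch: extract SmPC text with pypdfium2, then tag ADR
# entities with a BERT-based NER pipeline. The model checkpoint
# below is a hypothetical placeholder, not a real published model.
import pypdfium2 as pdfium
from transformers import pipeline

def extract_smpc_text(path: str) -> str:
    """Concatenate the text of every page in an SmPC PDF."""
    pdf = pdfium.PdfDocument(path)
    return "\n".join(page.get_textpage().get_text_range() for page in pdf)

ner = pipeline(
    "token-classification",
    model="umc/pubmedbert-adr-ner",  # placeholder checkpoint name
    aggregation_strategy="simple",   # merge word pieces into entity spans
)

text = extract_smpc_text("smpc_example.pdf")
entities = ner(text[:1000])  # a short chunk; real use would window the document
adrs = [e["word"] for e in entities if e["entity_group"] == "ADR"]
print(adrs)
```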

Funnel Vision

Grainger, David 01 January 2008
This paper discusses the videos and sculptural installation in my thesis exhibition. Shooting videos outside of the studio developed into a project overarching any individual video or its particular signs. Thus, this paper focuses on the video project, with examples that follow a timeline of development, rather than on the six videos actually on display in the exhibit. The two-part sculpture "Deer in the Headlights" was created in the context of these videos and coexists with them in a specific architectural space. This space, as well as the clichéd meaning of the deer's gaze, relates to the title of the show.

Context matters : Classifying Swedish texts using BERT's deep bidirectional word embeddings

Holmer, Daniel January 2020
When classifying texts using a linear classifier, the texts are commonly represented as feature vectors. Previous methods of representing features as vectors have been unable to capture the context of individual words in the texts, in theory leading to a poor representation of natural language. Bidirectional Encoder Representations from Transformers (BERT) uses a multi-headed self-attention mechanism to create deep bidirectional feature representations, able to model the whole context of all words in a sequence. A BERT model uses a transfer learning approach, where it is pre-trained on a large amount of data and can be further fine-tuned for several downstream tasks. This thesis uses one multilingual and two dedicated Swedish BERT models for the task of classifying Swedish texts as either easy-to-read or of standard complexity within their respective domains. The models' performance on the text classification task is then compared both with feature representation methods used in earlier studies and across the BERT models themselves. The results show that all models performed better on the classification task than the previous methods of feature representation. Furthermore, the dedicated Swedish models outperform the multilingual model, with the Swedish model pre-trained on more diverse data outperforming the other.
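As a rough illustration of this setup, the sketch below loads a dedicated Swedish BERT (here KB/bert-base-swedish-cased, assumed as an example) with a binary sequence-classification head; the labels and example sentence are illustrative, not the thesis's exact configuration.

```python
# Hedged sketch: a Swedish BERT encoder with a two-class head for
# easy-to-read vs. standard-complexity classification. The head is
# untrained here; fine-tuning on labelled texts would come first.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "KB/bert-base-swedish-cased"  # one dedicated Swedish model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2  # assumed labels: 0 = easy-to-read, 1 = standard
)

inputs = tokenizer(
    "Det här är en enkel mening.",  # "This is a simple sentence."
    return_tensors="pt", truncation=True, max_length=512,
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities (meaningless until fine-tuned)
```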

Extrakce vztahů mezi entitami / Entity Relationship Extraction

Šimečková, Zuzana January 2020
Relationship extraction is the task of extracting semantic relationships between entities from a text. We create a Czech Relationship Extraction Dataset (CERED) using distant supervision on Wikidata and Czech Wikipedia. We detail the methodology we used and the pitfalls we encountered. We then use CERED to fine-tune a neural network model for relationship extraction. We base our model on BERT, a language model pre-trained on extensive unlabeled data. We demonstrate that our model performs well on existing English relationship datasets (SemEval 2010 Task 8, TACRED) and report the results we achieved on CERED.
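The distant-supervision step can be pictured with a toy rule: if a sentence mentions both entities of a known Wikidata triple, label it with that relation. The triple table and sentence below are invented for illustration; a real pipeline over Czech text would also have to handle inflected entity forms.

```python
# Toy distant supervision: noisy but cheap sentence labelling.
# The (head, tail) -> relation table and the sentence are made up;
# matching on raw surface forms ignores Czech inflection on purpose.
triples = {("Praha", "Česko"): "capital of"}

def label_sentence(sentence: str):
    """Return (head, tail, relation) for every known triple whose
    entities both occur verbatim in the sentence."""
    return [
        (h, t, rel)
        for (h, t), rel in triples.items()
        if h in sentence and t in sentence
    ]

print(label_sentence("Praha je hlavní město státu Česko."))
# [('Praha', 'Česko', 'capital of')]
```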

Multilingual identification of offensive content in social media

Pàmies Massip, Marc January 2020
In today’s society, a large number of social media users are free to express their opinions on shared platforms. The socio-cultural differences between the people behind those accounts (in terms of ethnicity, gender, sexual orientation, religion, politics, and more) give rise to a significant share of online discussions that use offensive language, which often negatively affects the psychological well-being of its victims. To address the problem, the endless stream of user-generated content creates a need for an accurate and scalable way to detect offensive language with automated methods. This thesis explores different approaches to the offensiveness detection task, focusing on five languages: Arabic, Danish, English, Greek and Turkish. The results obtained using Support Vector Machines (SVM), Convolutional Neural Networks (CNN) and Bidirectional Encoder Representations from Transformers (BERT) are compared, with some of the tested methods achieving state-of-the-art results. The effect of the embeddings used, the dataset size, the class imbalance percentage and the addition of sentiment features are studied and analysed, as is the cross-lingual capability of pre-trained multilingual models.
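As one concrete point of comparison, here is a sketch of the kind of SVM baseline mentioned above: TF-IDF features feeding a linear SVM for binary offensive/not-offensive classification. The two example texts and labels are invented placeholders, and the thesis's exact feature setup may differ.

```python
# Hedged sketch of a classical baseline: TF-IDF + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["you are wonderful", "you are a complete idiot"]  # toy data
labels = [0, 1]                                            # 1 = offensive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)                 # real training: thousands of posts
print(clf.predict(["what an idiot"]))  # -> [1]
```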

Annotating Introductions in the Swedish Parliament Using Machine Learning

Mortensen Blomquist, Jesper January 2022
No description available.

Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models

Holmström, Oskar January 2021
The availability and use of knowledge graphs have become commonplace as a compact store of information and for the lookup of facts. However, the discrete representation makes the knowledge graph unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. The need for a continuous representation has spurred the development of knowledge graph embeddings. The idea is to position the entities of the graph relative to each other in a continuous low-dimensional vector space so that their relationships are preserved, ideally leading to clusters of entities with similar characteristics. Several methods to produce knowledge graph embeddings have been created, from simple models that minimize the distance between related entities to complex neural models. Almost all of these embedding methods attempt to create an accurate static representation of each entity and relation. However, as with words in natural language, both entities and relations in a knowledge graph hold different meanings in different local contexts.

With the recent development of Transformer models, and their success in creating contextual representations of natural language, work has been done to apply them to graphs. Initial results show great promise, but there are significant differences in architecture design across papers, and there is no clear direction on how Transformer models can best be applied to create contextual knowledge graph embeddings. Two of the main differences in previous work are how the attention mask is applied in the model and which input graph structures the model is trained on.

This report explores how different attention masking methods and graph inputs affect a Transformer model (in this report, BERT) on a link prediction task for triples. Models are trained with five attention masking methods, which restrict attention to varying degrees, and on three input graph structures (triples, paths, and interconnected triples).

The results indicate that a Transformer model trained with a masked language model objective performs strongest on the link prediction task when there are no restrictions on how attention is directed and when it is trained on graph structures that are sequential. This is similar to how models like BERT learn sentence structure after being exposed to a large number of training samples. For more complex graph structures, it is beneficial to encode information about the graph structure through how the attention mask is applied. There are also indications that the input graph structure affects the model's ability to learn underlying characteristics of the knowledge graph it is trained on.
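To make the masking idea concrete, the sketch below builds two token-level attention masks over a single (head, relation, tail) triple: one unrestricted, one restricted to neighbours within the triple. The pattern is illustrative only, not the exact masks evaluated in the thesis.

```python
# Hedged sketch: two attention-mask designs for a triple input.
import torch

tokens = ["[CLS]", "head", "relation", "tail", "[SEP]"]
n = len(tokens)

# Unrestricted: every position may attend to every other position,
# as in standard BERT-style masked language modelling.
full_mask = torch.ones(n, n)

# Restricted: each position attends only to itself and its direct
# neighbours, encoding the sequential triple structure in the mask.
local_mask = torch.eye(n)
for i in range(n - 1):
    local_mask[i, i + 1] = local_mask[i + 1, i] = 1.0

print(full_mask)
print(local_mask)
```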

Multilingual Zero-Shot and Few-Shot Causality Detection

Reimann, Sebastian Michael January 2021
Relations that hold between causes and their effects are fundamental for a wide range of sectors. Automatically finding sentences that express such relations may, for example, be of great interest to businesses or political institutions. However, for many languages other than English, a lack of training resources for this task needs to be dealt with. In recent years, large pretrained transformer-based architectures have proven very effective for tasks involving cross-lingual transfer, such as cross-lingual language inference as well as multilingual named entity recognition, POS-tagging and dependency parsing, which hints at similar potential for causality detection. In this thesis, we define causality detection as a binary labelling problem and use cross-lingual transfer to alleviate data scarcity for German and Swedish, using three different classifiers that make use of either multilingual sentence embeddings obtained from a pretrained encoder or pretrained multilingual language models. The source language in most of our experiments is English; for Swedish we also use a small German training set and a combination of English and German training data.

We try zero-shot transfer as well as using limited amounts of target-language data, either as a development set or as additional training data in a few-shot setting. In the latter scenario, we explore the impact of varying training data sizes. Moreover, data scarcity in our situation also makes it necessary to work with data from different annotation projects, and we explore how much this impacts our results. For German as a target language, our results in a zero-shot scenario expectedly fall short of monolingual experiments, but F1-macro scores between 60 and 65 in cases where annotation did not differ drastically still signal that at least some knowledge was transferred. Introducing even small amounts of target-language data yielded notable improvements, and with the full German training set of about 3,000 sentences combined with the most suitable English dataset, the performance for German in some scenarios nearly matches the state of the art for monolingual experiments on English. The best zero-shot performance on the Swedish data even outperformed the scores achieved for German. However, due to problems with the additional Swedish training data, we were not able to improve upon the zero-shot performance in a few-shot setting in the same manner as for German.
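A compressed sketch of the zero-shot setup, under assumed names: a multilingual encoder (here xlm-roberta-base as a stand-in) would be fine-tuned on English causality labels and then scored with macro F1 directly on German or Swedish sentences. The fine-tuning step is elided, so the head below is untrained.

```python
# Hedged sketch: zero-shot cross-lingual causality detection.
import torch
from sklearn.metrics import f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "xlm-roberta-base"  # stand-in multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2  # 1 = sentence expresses a causal relation
)
# ... fine-tune on English causality data here ...

# Zero-shot: score German sentences with the English-trained model.
sentences = ["Der Sturm verursachte den Stromausfall.",  # causal
             "Der Himmel ist heute blau."]               # non-causal
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1).tolist()

print(f1_score([1, 0], preds, average="macro"))  # F1-macro, as reported
```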

The past, present or future? : A comparative NLP study of Naive Bayes, LSTM and BERT for classifying Swedish sentences based on their tense

Navér, Norah January 2021
Natural language processing (NLP) is a field in computer science that is becoming increasingly important. One important part of NLP is the ability to sort text into the past, present or future, depending on when the event occurred or will occur. The objective of this thesis was to use text classification to classify Swedish sentences based on their tense: past, present or future. A further objective was to compare how lemmatisation affects the performance of the models. The problem was tackled by implementing three machine learning models, Naive Bayes, LSTM and BERT, on both lemmatised and non-lemmatised data. The results showed that overall performance was affected negatively when the data was lemmatised. The best performing model was BERT, with an accuracy of 96.3%. The result was useful, as the best performing model had very high accuracy and performed well on newly constructed sentences.
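The negative effect of lemmatisation has an intuitive explanation: Swedish tense is carried largely by verb inflection, which lemmatisation strips away. The toy sketch below, with an invented two-entry lemma table standing in for a real lemmatiser, shows a Naive Bayes baseline losing exactly that signal.

```python
# Hedged sketch: the same Naive Bayes baseline on raw vs. lemmatised
# Swedish sentences. Data, labels, and the lemma table are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = ["jag sprang hem",       # past: "I ran home"
             "jag springer hem",     # present: "I run home"
             "jag ska springa hem"]  # future: "I will run home"
tenses = ["past", "present", "future"]

lemmas = {"sprang": "springa", "springer": "springa"}
def lemmatise(s: str) -> str:
    return " ".join(lemmas.get(w, w) for w in s.split())

for variant, data in [("raw", sentences),
                      ("lemmatised", [lemmatise(s) for s in sentences])]:
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(data, tenses)
    print(variant, clf.predict(data))
# After lemmatisation, the past and present sentences become identical
# token sequences, so the classifier can no longer tell them apart.
```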
