261

Punctuation Restoration as Post-processing Step for Swedish Language Automatic Speech Recognition

Gupta, Ishika January 2023 (has links)
This thesis focuses on the Swedish language, where punctuation restoration, especially as a post-processing step for the output of Automatic Speech Recognition (ASR) applications, needs further research. I have collaborated with NewsMachine AB, a company that provides large-scale media monitoring services for its clients, for which it employs ASR technology to convert spoken content into text. This thesis follows an approach initially designed for high-resource languages such as English. The method is based on KB-BERT, a pre-trained Swedish neural network language model developed by the National Library of Sweden. The project uses KB-BERT with a Bidirectional Long Short-Term Memory (BiLSTM) layer on top for the task of punctuation restoration. The model is fine-tuned using the TED Talk 2020 dataset in Swedish, which is acquired from OPUS (an open-source parallel corpus). The punctuation marks comma, period, question mark, and colon are considered for this project. A comparative analysis is conducted between two KB-BERT models: bert-base-swedish-cased and albert-base-swedish-cased-alpha. The fine-tuned Swedish BERT-BiLSTM model, trained on 5 classes, achieved an overall F1-score of 81.6%, surpassing the performance of the ALBERT-BiLSTM model, which was also trained on 5 classes and obtained an overall F1-score of 66.6%. Additionally, the BERT-BiLSTM model, trained on 4 classes (excluding colon), outperformed prestoBERT, an existing model designed for the same task in Swedish, with an overall F1-score of 82.8%. In contrast, prestoBERT achieved an overall F1-score of 78.9%. As a further evaluation of the model's performance on ASR-transcribed text, noise was injected based on four probabilities (0.05, 0.1, 0.15, 0.2) into a copy of the test data in the form of three word-level errors (deletion, substitution, and insertion). The performance of the BERT-BiLSTM model substantially decreased for all the errors as the probability of injected noise increased. In contrast, the model still performed comparatively better when dealing with deletion errors as compared to substitution and insertion errors. Lastly, the data resources received from NewsMachine AB were used to perform a qualitative assessment of how the model performs in punctuating real transcribed data as compared to human judgment.
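
A minimal sketch of the architecture this abstract describes: a pretrained KB-BERT encoder with a BiLSTM layer and a token-level classifier over the punctuation classes. The label set, hidden size, and example sentence are assumptions for illustration; this is not the thesis code.

    # Illustrative sketch, not the thesis implementation: KB-BERT encoder + BiLSTM
    # head that predicts one punctuation label per token (assumed label set below).
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    LABELS = ["NONE", "COMMA", "PERIOD", "QUESTION", "COLON"]  # 5 classes as in the thesis

    class BertBiLSTMPunctuator(nn.Module):
        def __init__(self, model_name="KB/bert-base-swedish-cased", hidden=256):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                                batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, len(LABELS))

        def forward(self, input_ids, attention_mask):
            states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            lstm_out, _ = self.lstm(states)    # re-contextualise with the BiLSTM
            return self.classifier(lstm_out)   # (batch, seq_len, num_labels)

    tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
    batch = tokenizer(["hej hur mår du idag"], return_tensors="pt")
    with torch.no_grad():
        logits = BertBiLSTMPunctuator()(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)  # torch.Size([1, seq_len, 5])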
262

An AI-based System for Assisting Planners in a Supply Chain with Email Communication

Dantu, Sai Shreya Spurthi, Yadlapalli, Akhilesh January 2023 (has links)
Background: Communication plays a crucial role in supply chain management (SCM) as it facilitates the flow of information, materials, and goods across various stages of the supply chain. In the context of supply planning, each planner manages thousands of supply chain entities and spends a lot of time reading and responding to high volumes of emails related to part orders, delays, and backorders that can lead to information overload and hinder workflow and decision-making. Therefore, streamlining communication and enhancing email management are essential for optimizing supply chain efficiency.

Objectives: This study aims to create an automated system that can summarize email conversations between planners, suppliers, and other stakeholders. The goal is to increase communication efficiency using Natural Language Processing (NLP) algorithms to extract important information from lengthy conversations. Additionally, the study will explore the effectiveness of using conditional random fields (CRF) to filter out irrelevant content during preprocessing.

Methods: We chose four advanced pre-trained abstractive dialogue summarization models, BART, PEGASUS, T5, and CODS, and evaluation metrics, ROUGE and BERTScore, to compare their performance in effectively summarizing our email conversations. We used CRF to preprocess raw data from around 400 planner-supplier email conversations to extract important sentences in a dialogue format and label them with specific dialogue act tags. We then manually summarized the 400 conversations and fine-tuned the four chosen models. Finally, we evaluated the models using ROUGE and BERTScore metrics to determine their similarity to human references.

Results: The results show that the performance of the summarization models has significantly improved after fine-tuning the models with domain-specific data. The BART model achieved the highest ROUGE-1 score of 0.65, ROUGE-L score of 0.56, and BERTScore of 0.95 compared to other models. Additionally, CRF-based preprocessing proved to be crucial in extracting essential information and minimizing unnecessary details for the summarization process.

Conclusions: This study shows that advanced NLP techniques can make supply chain communication workflows more efficient. The BART-based email summarization tool that we created showed great potential in giving important insights and helping planners deal with information overload.
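
As a hedged illustration of the summarize-and-score loop described above: the checkpoint (facebook/bart-large-cnn, standing in for the fine-tuned domain model), the example conversation, and the reference summary are placeholders, not the study's planner-supplier data.

    # Sketch only: generate an abstractive summary with a BART checkpoint and score
    # it with ROUGE. Model name, dialogue, and reference are illustrative.
    from transformers import BartForConditionalGeneration, BartTokenizer
    from rouge_score import rouge_scorer

    model_name = "facebook/bart-large-cnn"   # stand-in for the fine-tuned domain model
    tokenizer = BartTokenizer.from_pretrained(model_name)
    model = BartForConditionalGeneration.from_pretrained(model_name)

    dialogue = ("Planner: Order 4711 for part X is delayed. When can you ship?\n"
                "Supplier: The new ship date is May 12; the quantity is unchanged.")
    inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

    reference = "Supplier confirms order 4711 will now ship on May 12 with unchanged quantity."
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    print(summary)
    print(scorer.score(reference, summary))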
263

Automating Question Generation Given the Correct Answer / Automatisering av frågegenerering givet det rätta svaret

Cao, Haoliang January 2020 (has links)
In this thesis, we propose an end-to-end deep learning model for a question generation task. Given a Wikipedia article written in English and a segment of text appearing in the article, the model can generate a simple question whose answer is the given text segment. The model is based on an encoder-decoder architecture. Our experiments show that a model with a fine-tuned BERT encoder and a self-attention decoder gives the best performance. We also propose an evaluation metric for the question generation task, which evaluates both the syntactic correctness and the relevance of the generated questions. According to our analysis of sampled data, the new metric gives better evaluations than other popular metrics for sequence-to-sequence tasks. / I den här avhandlingen presenteras en djup neural nätverksmodell för en frågeställningsuppgift. Givet en Wikipediaartikel skriven på engelska och ett textsegment i artikeln kan modellen generera en enkel fråga vars svar är det givna textsegmentet. Modellen är baserad på en kodar-avkodararkitektur (encoder-decoder architecture). Våra experiment visar att en modell med en finjusterad BERT-kodare och en självuppmärksamhetsavkodare (self-attention decoder) ger bästa prestanda. Vi föreslår också en utvärderingsmetrik för frågeställningsuppgiften, som utvärderar både syntaktisk korrekthet och relevans för de genererade frågorna. Enligt vår analys av samplade data visar det sig att den nya metriken ger bättre utvärdering jämfört med andra populära metriker för utvärdering.
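
One hedged way to sketch the described setup, a pretrained BERT encoder paired with a self-attention decoder, is Hugging Face's EncoderDecoderModel; the answer-marking scheme ([ANS] tags) and the example pair are assumptions, and this is not the author's implementation.

    # Sketch only: BERT encoder + BERT-style decoder with cross-attention, trained
    # to emit a question for a context in which the answer span is marked.
    from transformers import BertTokenizer, EncoderDecoderModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = EncoderDecoderModel.from_encoder_decoder_pretrained(
        "bert-base-uncased", "bert-base-uncased")
    model.config.decoder_start_token_id = tokenizer.cls_token_id
    model.config.pad_token_id = tokenizer.pad_token_id

    # Context with the target answer span marked; the question is the training target.
    context = "The Eiffel Tower, completed in [ANS] 1889 [/ANS], is located in Paris."
    question = "When was the Eiffel Tower completed?"
    enc = tokenizer(context, return_tensors="pt")
    labels = tokenizer(question, return_tensors="pt").input_ids
    outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
    print(float(outputs.loss))  # a fine-tuning step would backpropagate this loss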
264

Investigating the Effect of Complementary Information Stored in Multiple Languages on Question Answering Performance : A Study of the Multilingual-T5 for Extractive Question Answering / Vad är effekten av kompletterande information lagrad i flera språk på frågebesvaring : En undersökning av multilingual-T5 för frågebesvaring

Aurell Hansson, Björn January 2021 (has links)
Extractive question answering is a popular domain in the field of natural language processing, where machine learning models are tasked with answering questions given a context. Historically the field has been centered on monolingual models, but recently more and more multilingual models have been developed, such as Google's MT5 [1]. Because of this, machine translations of English have been used when training and evaluating these models, but machine translations can be degraded and do not always reflect their target language fairly. This report investigates if complementary information stored in other languages can improve monolingual QA performance for languages where only machine translations are available. It also investigates if exposure to more languages can improve zero-shot cross-lingual QA performance (i.e. when the question and answer do not have matching languages) by providing complementary information. We fine-tune 3 different MT5 models on QA datasets consisting of machine translations, as well as one model on those datasets in combination with 3 other datasets that are not translations. We then evaluate the different models on the MLQA and XQuAD datasets. The results show that for 2 out of the 3 languages evaluated, complementary information stored in other languages had a positive effect on the QA performance of the MT5. For zero-shot cross-lingual QA, the complementary information offered by the fused model led to improved performance compared to 2/3 of the MT5 models trained only on translated data, indicating that complementary information from other languages does not offer any improvement in this regard. / Frågebesvaring (QA) är en populär domän inom naturlig språkbehandling, där maskininlärningsmodeller har till uppgift att svara på frågor. Historiskt har fältet varit inriktat på enspråkiga modeller, men nyligen har fler och fler flerspråkiga modeller utvecklats, till exempel Googles MT5 [1]. På grund av detta har maskinöversättningar av engelska använts vid träning och utvärdering av dessa modeller, men maskinöversättningar kan vara försämrade och speglar inte alltid deras målspråk rättvist. Denna rapport undersöker om kompletterande information som lagras i andra språk kan förbättra enspråkig QA-prestanda för språk där endast maskinöversättningar är tillgängliga. Den undersöker också om exponering för fler språk kan förbättra QA-prestanda på zero-shot cross-lingual QA (dvs. där frågan och svaret inte har matchande språk) genom att tillhandahålla kompletterande information. Vi finjusterar 3 olika modeller på QA-datamängder som består av maskinöversättningar, samt en modell på datamängderna tillsammans i kombination med 3 andra datamängder som inte är översättningar. Vi utvärderar sedan de olika modellerna på MLQA- och XQuAD-datauppsättningarna. Resultaten visar att för 2 av de 3 utvärderade språken hade kompletterande information som lagrats i andra språk en positiv effekt på QA-prestanda. För zero-shot cross-lingual QA leder den kompletterande informationen som erbjuds av den sammansmälta modellen till förbättrad prestanda jämfört med 2/3 av modellerna som tränats endast på översättningar, vilket indikerar att kompletterande information från andra språk inte ger någon förbättring i detta avseende.
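
A hedged sketch of how an mT5-style text-to-text model is queried for extractive QA; the prompt format and the google/mt5-small checkpoint (rather than the fine-tuned models from the study) are assumptions, and a sensible answer would only be expected after fine-tuning.

    # Sketch: extractive QA cast as text-to-text generation with mT5.
    from transformers import MT5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
    model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

    question = "Vilket år grundades KTH?"
    context = "KTH, Kungliga Tekniska högskolan, grundades 1827 i Stockholm."
    prompt = f"question: {question} context: {context}"   # assumed prompt format

    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = model.generate(input_ids, max_length=16)
    print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))  # ideally "1827" after fine-tuning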
265

Email Thread Summarization with Conditional Random Fields

Shockley, Darla Magdalene 23 August 2010 (has links)
No description available.
266

Low-resource Semantic Role Labeling Through Improved Transfer Learning

Lindbäck, Hannes January 2024 (has links)
For several more complex tasks, such as semantic role labeling (SRL), large annotated datasets are necessary. For smaller and lower-resource languages, these are not readily available. As a way to overcome this data bottleneck, this thesis investigates the possibilities of using transfer learning from a high-resource language to a low-resource language and then performing zero-shot SRL on the low-resource language. We additionally investigate whether the transfer learning can be improved by freezing the parameters of a layer in the pre-trained model, encouraging the model to instead focus on learning the parameters of the layers necessary for the task. By training models in English and then evaluating on Spanish, Catalan, German and Chinese CoNLL-2009 data, we find that transfer learning for zero-shot SRL can be an effective technique and can in certain cases outperform models trained on small amounts of data. We also find that the results improve when freezing the parameters of the lower layers of the model, the layers focused on surface tasks, as this allows the model to improve the layers necessary for SRL.
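
A minimal sketch of the layer-freezing idea: keep the embeddings and the lower transformer layers of a multilingual encoder fixed and train only the upper layers and the SRL classification head. The encoder choice (xlm-roberta-base), label count, and cut-off layer are assumptions, not the thesis configuration.

    # Sketch: freeze the lower layers of a pretrained encoder before fine-tuning for SRL.
    from transformers import AutoModelForTokenClassification

    model = AutoModelForTokenClassification.from_pretrained(
        "xlm-roberta-base", num_labels=40)  # 40 = placeholder size of the SRL label set

    FREEZE_BELOW = 6  # freeze the embeddings and the first 6 encoder layers
    for p in model.roberta.embeddings.parameters():
        p.requires_grad = False
    for layer in model.roberta.encoder.layer[:FREEZE_BELOW]:
        for p in layer.parameters():
            p.requires_grad = False

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable:,}")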
267

Natural language processing (NLP) in Artificial Intelligence (AI): a functional linguistic perspective

Panesar, Kulvinder 07 October 2020 (has links)
This chapter encapsulates the multi-disciplinary nature that facilitates NLP in AI and reports on a linguistically orientated conversational software agent (CSA) framework (Panesar 2017) sensitive to natural language processing (NLP) of language in the agent environment. We present a novel computational approach that uses the functional linguistic theory of Role and Reference Grammar (RRG) as the linguistic engine. Viewing language as action, utterances change the state of the world, and hence the speaker's and hearer's mental states change as a result of these utterances. A plan-based method of discourse management (DM) using the BDI model architecture is deployed to support a greater complexity of conversation. This CSA investigates the integration, intersection and interface of language, knowledge, speech act constructions (SAC) as grammatical objects, and the sub-models of BDI and DM for NLP. We present an investigation into the intersection and interface between our linguistic and knowledge (belief base) models for both dialogue management and planning. The architecture has three phase models: (1) a linguistic model based on RRG; (2) an Agent Cognitive Model (ACM) with (a) a knowledge representation model employing conceptual graphs (CGs) serialised to the Resource Description Framework (RDF) and (b) a planning model underpinned by BDI concepts, intentionality and rational interaction; and (3) a dialogue model employing common ground. Use of RRG as a linguistic engine for the CSA was successful. We identify the complexity of the semantic gap between internal representations and offer details of a conceptual bridging solution.
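
As a purely illustrative sketch (not the CSA's implementation, and with invented class and method names), the BDI-style cognitive loop described above can be pictured as beliefs updated from an interpreted speech act, followed by deliberation that commits to a plan.

    # Toy BDI loop: perceive a speech act, update beliefs, deliberate, commit to a plan.
    from dataclasses import dataclass, field

    @dataclass
    class BDIAgent:
        beliefs: dict = field(default_factory=dict)      # belief base (cf. conceptual graphs / RDF)
        desires: list = field(default_factory=list)      # candidate goals
        intentions: list = field(default_factory=list)   # plans the agent has committed to

        def perceive(self, speech_act):
            # an utterance, viewed as action, changes the agent's beliefs
            self.beliefs.update(speech_act)

        def deliberate(self):
            # dialogue-management step: pick a goal and commit to a plan for it
            if self.desires:
                self.intentions.append({"plan_for": self.desires.pop(0)})

    agent = BDIAgent(desires=["answer_user_question"])
    agent.perceive({"speaker": "user", "act": "ask", "topic": "opening_hours"})
    agent.deliberate()
    print(agent.beliefs, agent.intentions)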
268

Improving the Accessibility of Arabic Electronic Theses and Dissertations (ETDs) with Metadata and Classification

Abdelrahman, Eman January 2021 (has links)
Much research work has been done to extract data from scientific papers, journals, and articles. However, Electronic Theses and Dissertations (ETDs) remain an unexplored genre of data in the research fields of natural language processing and machine learning. Moreover, much of the related research involved data that is in the English language. Arabic data such as news and tweets have begun to receive some attention in the past decade. However, Arabic ETDs remain an untapped source of data despite the vast number of benefits to students and future generations of scholars. Some ways of improving the browsability and accessibility of data include data annotation, indexing, parsing, translation, and classification. Classification is essential for the searchability and management of data, which can be manual or automated. The latter is beneficial when handling growing volumes of data. There are two main roadblocks to performing automatic subject classification on Arabic ETDs. The first is the unavailability of a public corpus of Arabic ETDs. The second is the Arabic language’s linguistic complexity, especially in academic documents. This research presents the Otrouha project, which aims at building a corpus of key metadata of Arabic ETDs as well as providing a methodology for their automatic subject classification. The first goal is aided by collecting data from the AskZad Digital Library. The second goal is achieved by exploring different machine learning and deep learning techniques. The experiments’ results show that deep learning using pretrained language models gave the highest classification performance, indicating that language models significantly contribute to natural language understanding. / M.S. / An Electronic Thesis or Dissertation (ETD) is an openly-accessible electronic version of a graduate student’s research thesis or dissertation. It documents their main research effort that has taken place and becomes available in the University Library instead of a paper copy. Over time, collections of ETDs have been gathered and made available online through different digital libraries. ETDs are a valuable source of information for scholars and researchers, as well as librarians. With the digitalization move in most Middle Eastern Universities, the need to make Arabic ETDs more accessible significantly increases as their numbers increase. One of the ways to improve their accessibility and searchability is through providing automatic classification instead of manual classification. This thesis project focuses on building a corpus of metadata of Arabic ETDs and building a framework for their automatic subject classification. This is expected to pave the way for more exploratory research on this valuable genre of data.
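
A hedged sketch of the subject-classification step using a pretrained Arabic language model; the checkpoint (AraBERT), the placeholder subject labels, and the example abstract are assumptions rather than the Otrouha setup, and the prediction is only meaningful after fine-tuning on labelled ETD metadata.

    # Sketch: classify an Arabic abstract into a subject category with a pretrained model.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    CATEGORIES = ["Engineering", "Medicine", "Law", "Education"]  # placeholder labels
    name = "aubmindlab/bert-base-arabertv2"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(CATEGORIES))

    abstract = "تتناول هذه الأطروحة تطبيقات تعلم الآلة في تشخيص الأمراض"
    inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print(CATEGORIES[int(logits.argmax())])  # reliable only after fine-tuning on labelled ETDs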
269

Spelling Normalisation and Linguistic Analysis of Historical Text for Information Extraction

Pettersson, Eva January 2016 (has links)
Historical text constitutes a rich source of information for historians and other researchers in humanities. Many texts are however not available in an electronic format, and even if they are, there is a lack of NLP tools designed to handle historical text. In my thesis, I aim to provide a generic workflow for automatic linguistic analysis and information extraction from historical text, with spelling normalisation as a core component in the pipeline. In the spelling normalisation step, the historical input text is automatically normalised to a more modern spelling, enabling the use of existing taggers and parsers trained on modern language data in the succeeding linguistic analysis step. In the final information extraction step, certain linguistic structures are identified based on the annotation labels given by the NLP tools, and ranked in accordance with the specific information need expressed by the user. An important consideration in my implementation is that the pipeline should be applicable to different languages, time periods, genres, and information needs by simply substituting the language resources used in each module. Furthermore, the reuse of existing NLP tools developed for the modern language is crucial, considering the lack of linguistically annotated historical data combined with the high variability in historical text, making it hard to train NLP tools specifically aimed at analysing historical text. In my evaluation, I show that spelling normalisation can be a very useful technique for easy access to historical information content, even in cases where there is little (or no) annotated historical training data available. For the specific information extraction task of automatically identifying verb phrases describing work in Early Modern Swedish text, 91 out of the 100 top-ranked instances are true positives in the best setting.
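
A toy sketch of the pipeline idea: normalise historical spelling with a small substitution lexicon, then hand the modernised tokens to any tagger or parser trained on modern language data. The lexicon entries are invented examples in the spirit of Early Modern Swedish, not the thesis resources.

    # Toy pipeline: historical spelling -> modern spelling -> existing modern NLP tools.
    NORMALISATION_LEXICON = {
        "hwad": "vad",       # assumed Early Modern Swedish -> modern Swedish examples
        "effter": "efter",
        "oc": "och",
    }

    def normalise(tokens):
        # fall back to the original form when the lexicon has no entry
        return [NORMALISATION_LEXICON.get(t.lower(), t) for t in tokens]

    def analyse(historical_sentence, tagger):
        modern_tokens = normalise(historical_sentence.split())
        return tagger(modern_tokens)   # any POS tagger / parser trained on modern data

    print(normalise("hwad effter detta".split()))  # ['vad', 'efter', 'detta']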
270

Automatic Text Ontological Representation and Classification via Fundamental to Specific Conceptual Elements (TOR-FUSE)

Razavi, Amir Hossein 16 July 2012 (has links)
In this dissertation, we introduce a novel text representation method mainly used for text classification. The presented representation method is initially based on a variety of closeness relationships between pairs of words in text passages within the entire corpus. This representation is then used as the basis for our multi-level lightweight ontological representation method (TOR-FUSE), in which documents are represented based on their contexts and the goal of the learning task. The method is unlike traditional representation methods, in which all documents are represented solely based on their constituent words and in isolation from the goal for which they are represented. We believe choosing the correct granularity of representation features is an important aspect of text classification. Interpreting data in a more general dimensional space, with fewer dimensions, can convey more discriminative knowledge and decrease the level of learning perplexity. The multi-level model allows data interpretation in a more conceptual space, rather than one containing only scattered words occurring in texts. It aims to extract the knowledge tailored for the classification task by automatically creating a lightweight ontological hierarchy of representations. In the last step, we train a tailored ensemble learner over a stack of representations at different conceptual granularities. The final result is a mapping and a weighting of the targeted concept of the original learning task over a stack of representations and the granular conceptual elements of its different levels (a hierarchical mapping instead of a linear mapping over a vector). Finally, the entire algorithm is applied to a variety of general text classification tasks, and its performance is evaluated in comparison with well-known algorithms.
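
As a loose, hedged analogy to the multi-granularity idea (not the TOR-FUSE implementation), the sketch below represents the same documents at two granularities, word-level TF-IDF and a coarser reduced space, trains a classifier on each, and combines them with a simple soft vote; all data and label names are invented.

    # Sketch: two representation granularities feeding a simple soft-vote ensemble.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression

    docs = ["the patient showed fever and cough", "the court ruled on the appeal",
            "antibiotics reduced the infection", "the judge dismissed the case"]
    labels = ["medical", "legal", "medical", "legal"]

    word_level = make_pipeline(TfidfVectorizer(), LogisticRegression())
    concept_level = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2),
                                  LogisticRegression())
    word_level.fit(docs, labels)
    concept_level.fit(docs, labels)

    test = ["the infection required treatment"]
    avg_proba = (word_level.predict_proba(test) + concept_level.predict_proba(test)) / 2
    print(dict(zip(word_level.classes_, avg_proba[0])))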
