1

A Deep Neural Network-Based Model for Named Entity Recognition for Hindi Language

Sharma, Richa, Morwal, Sudha, Agarwal, Basant, Chandra, Ramesh, Khan, Mohammad S. 01 October 2020 (has links)
The aim of this work is to develop efficient named entity recognition from given text, which in turn improves the performance of systems that use natural language processing (NLP). The performance of IoT-based devices such as Alexa and Cortana depends significantly on an efficient NLP model. To increase the capability of smart IoT devices to comprehend natural language, named entity recognition (NER) tools play an important role in these devices. In general, NER is a two-step process: first, proper nouns are identified in the text, and then they are classified into predefined categories of entities such as person, location, measure, organization and time. NER is often performed as a subtask in natural language processing, where it increases the accuracy of the overall NLP task. In this paper, we propose a deep neural network architecture for named entity recognition for the resource-scarce language Hindi, based on a convolutional neural network (CNN), a bidirectional long short-term memory (Bi-LSTM) network and a conditional random field (CRF). In the proposed approach, we first use the skip-gram word2vec model and the GloVe model to represent words as semantic vectors, which are further used in different deep neural network-based architectures. We use character- and word-level embeddings to represent the text, capturing information at a fine-grained level. Thanks to the character-level embeddings, the proposed model is robust to out-of-vocabulary words. Experimental results show that the combination of Bi-LSTM, CNN and CRF performs better than baseline methods such as a recurrent neural network, LSTM and Bi-LSTM used individually.
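As a rough illustration of the architecture this abstract describes, here is a minimal PyTorch sketch of a character-CNN + Bi-LSTM + CRF tagger, assuming the pytorch-crf package for the CRF layer; the embedding sizes, filter widths, and other hyperparameters are illustrative guesses, not the authors' settings.

```python
# Minimal sketch of a char-CNN + Bi-LSTM + CRF tagger. Shapes and
# hyperparameters are illustrative assumptions. Requires torch and
# pytorch-crf (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF

class CharCNNBiLSTMCRF(nn.Module):
    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)   # init from word2vec/GloVe
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Character-level CNN: captures sub-word patterns, helps with OOV words.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden // 2,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(hidden, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def _features(self, words, chars):
        # words: (batch, seq); chars: (batch, seq, max_word_len)
        b, s, l = chars.shape
        c = self.char_emb(chars).view(b * s, l, -1).transpose(1, 2)
        c = torch.max(self.char_cnn(c), dim=2).values.view(b, s, -1)  # max-pool
        x = torch.cat([self.word_emb(words), c], dim=-1)
        h, _ = self.lstm(x)
        return self.proj(h)  # per-token tag scores (CRF emissions)

    def loss(self, words, chars, tags, mask):
        # mask: bool tensor marking real (non-padding) tokens
        return -self.crf(self._features(words, chars), tags, mask=mask)

    def predict(self, words, chars, mask):
        return self.crf.decode(self._features(words, chars), mask=mask)
```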
2

Flexible Structured Prediction in Natural Language Processing with Partially Annotated Corpora

Xiao Zhang (8776265) 29 April 2020 (has links)
Structured prediction makes coherent decisions over structured objects, capturing the interrelations of the predicted variables. It has been widely used in many areas, such as bioinformatics, computer vision, speech recognition, and natural language processing. Machine learning with reduced supervision aims to lessen laborious and error-prone annotation effort and to benefit low-resource languages. In this dissertation we study structured prediction with reduced supervision for two sets of problems, sequence labeling and dependency parsing, both of which are representative structured prediction problems in NLP. We investigate three different approaches.

The first approach is learning with a modular architecture by task decomposition. By decomposing the labels into a location sub-label and a type sub-label, we designed neural modules to tackle these sub-labels respectively, with an additional module to fuse the information. The experiments on benchmark datasets show the modular architecture outperforms existing models and can use partially labeled data together with fully labeled data to improve on the performance of using fully labeled data alone.

The second approach builds the neural CRF autoencoder (NCRFAE) model, which combines a discriminative component and a generative component for semi-supervised sequence labeling. The model has a unified structure of shared parameters, using different loss functions for labeled and unlabeled data. We developed a variant of the EM algorithm for optimizing the model with tractable inference. The experiments on several languages in the POS tagging task show the model outperforms existing systems in both supervised and semi-supervised setups.

The third approach builds two models for semi-supervised dependency parsing, namely the local autoencoding parser (LAP) and the global autoencoding parser (GAP). LAP assumes the chain-structured sentence has a latent representation and uses this representation to construct the dependency tree, while GAP treats the dependency tree itself as a latent variable. Both models have unified structures for sentences with and without annotated parse trees. The experiments on several languages show both parsers can use unlabeled sentences to improve on the performance with labeled sentences alone; LAP is faster, while GAP outperforms existing models.
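The shared-parameter, two-loss design of the NCRF autoencoder lends itself to a compact sketch. The following schematic is not the dissertation's actual model: labeled batches use a discriminative tagging loss, unlabeled batches a generative reconstruction loss, and both update the same encoder. The module shapes and the soft-reconstruction shortcut (standing in for the paper's EM-based inference) are assumptions.

```python
# Schematic of semi-supervised training with a shared encoder and two
# losses: tagging loss on labeled data, reconstruction loss on unlabeled
# data. All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class NCRFAELike(nn.Module):
    def __init__(self, vocab, tags, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.tagger = nn.Linear(dim, tags)     # discriminative head
        self.decoder = nn.Linear(tags, vocab)  # generative head: tags -> words

    def forward(self, words):
        h, _ = self.enc(self.emb(words))
        return self.tagger(h)                  # per-token tag scores

def training_step(model, words, gold_tags=None):
    scores = model(words)                      # (batch, seq, tags)
    if gold_tags is not None:                  # labeled batch: tagging loss
        return nn.functional.cross_entropy(scores.transpose(1, 2), gold_tags)
    # Unlabeled batch: reconstruct words from the soft tag distribution,
    # a stand-in for the paper's EM-style generative objective.
    recon = model.decoder(scores.softmax(dim=-1))
    return nn.functional.cross_entropy(recon.transpose(1, 2), words)
```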
3

Empirical study and multi-task learning exploration for neural sequence labeling models

Lu, Peng 04 1900 (has links)
No description available.
4

Rozpoznávání událostí ve fotbalu z prostoročasových dat objektů ve hře / Football Event Recognition for Spatiotemporal Data of Gaming Objects

Čížek, Tomáš January 2018 (has links)
This diploma thesis deals with automatic soccer event detection. Its goal is to introduce the reader to this problem, discuss possible ways of solving it and then implement event detection. The work focuses on event recognition using spatio-temporal data of gaming objects. The presented approach converts event detection into a sequence labeling task, which is then solved using LSTM recurrent neural networks. Finally, the result of the sequence labeling is interpreted as detected events. A library for event detection was created as the output of this work. It allows the user to experiment with different ways of formulating event detection as a sequence labeling task.
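To make the framing concrete, here is a minimal sketch of event detection cast as per-frame sequence labeling with an LSTM; the feature layout (player and ball coordinates) and the event label set are illustrative assumptions, not the thesis's actual configuration.

```python
# Each game frame carries spatio-temporal features (e.g. player and ball
# positions); the network labels every frame with an event class, turning
# event detection into sequence labeling. Sizes and labels are assumptions.
import torch
import torch.nn as nn

EVENTS = ["none", "pass", "shot", "tackle"]       # hypothetical label set

class FrameTagger(nn.Module):
    def __init__(self, feat_dim=46, hidden=128):  # e.g. 22 players + ball, x/y
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(EVENTS))

    def forward(self, frames):                    # (batch, time, feat_dim)
        h, _ = self.lstm(frames)
        return self.out(h)                        # one event score per frame

model = FrameTagger()
scores = model(torch.randn(1, 500, 46))           # 500 frames of a match clip
labels = scores.argmax(dim=-1)                    # detected event per frame
```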
5

Semisupervizované hluboké učení v označování sekvencí / Semi-supervised deep learning in sequence labeling

Páll, Juraj Eduard January 2019 (has links)
Sequence labeling is a type of machine learning problem that involves assigning a label to each sequence member. Deep learning has shown good performance for this problem. However, one disadvantage of this approach is its requirement of having a large amount of labeled data. Semi-supervised learning mitigates this problem by using cheaper unlabeled data together with labeled data. Currently, usage of semi-supervised deep learning for sequence labeling is limited. Therefore, the focus of this thesis is on the application of semi-supervised deep learning in sequence labeling. Existing semi-supervised deep learning approaches are examined, and approaches for sequence labeling are proposed. The proposed approaches were implemented and experimentally evaluated on named-entity recognition and part-of-speech tagging tasks.
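The abstract does not name the proposed approaches, so as one generic illustration of semi-supervised sequence labeling, here is a sketch of self-training (pseudo-labeling), a standard baseline in this setting; `model`, `train`, and the dataset objects are hypothetical placeholders, not the thesis's methods.

```python
# Self-training sketch: train on labeled data, pseudo-label confident
# unlabeled sentences, add them to the training set, and repeat.
import torch

def self_train(model, labeled, unlabeled, train, threshold=0.95, rounds=3):
    for _ in range(rounds):
        train(model, labeled)                      # supervised pass
        new_labeled = []
        with torch.no_grad():
            for words in unlabeled:
                probs = model(words).softmax(dim=-1)   # (seq, tags)
                conf, tags = probs.max(dim=-1)
                if conf.min() >= threshold:        # keep confident sentences only
                    new_labeled.append((words, tags))
        labeled = labeled + new_labeled            # grow the training set
    return model
```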
6

Monolingual and Cross-Lingual Survey Response Annotation

Zhao, Yahui January 2023 (has links)
Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse types of text data, including text from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP. Notwithstanding their capabilities, the performance of these models largely relies on the availability of annotated training data. This thesis employs the multilingual pre-trained model XLM-R to examine its efficacy in sequence labelling of open-ended questions on democracy across multilingual surveys. Traditional annotation practices have been labour-intensive and time-consuming, with limited automation attempts. Previous studies often translated multilingual data into English, bypassing the challenges and nuances of native languages. Our study explores automatic multilingual annotation at the token level for democracy survey responses in five languages: Hungarian, Italian, Polish, Russian, and Spanish. The results reveal promising F1 scores, indicating the feasibility of using multilingual models for such tasks. However, the performance of these models is closely tied to the quality and nature of the training set. This research paves the way for future experiments and model adjustments, underscoring the importance of refining training data and optimizing model techniques for enhanced classification accuracy.
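As a rough sketch of the kind of setup the thesis describes, the following uses the Hugging Face transformers library to run XLM-R as a token classifier; the label set and the example sentence are assumptions, not the thesis's annotation scheme.

```python
# Token-level annotation with XLM-R via Hugging Face transformers.
# The tag set and example are illustrative; a real setup would fine-tune
# the head and align subword predictions back to words.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-TOPIC", "I-TOPIC"]              # hypothetical tag set
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels))

enc = tok("La democracia garantiza elecciones libres", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                  # (1, subwords, num_labels)
pred = [labels[i] for i in logits.argmax(-1)[0].tolist()]  # per-subword tags
```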
7

Použití hlubokých kontextualizovaných slovních reprezentací založených na znacích pro neuronové sekvenční značkování / Deep contextualized word embeddings from character language models for neural sequence labeling

Lief, Eric January 2019 (has links)
A family of Natural Language Processing (NLP) tasks such as part-of-speech (PoS) tagging, Named Entity Recognition (NER), and Multiword Expression (MWE) identification all involve assigning labels to sequences of words in text (sequence labeling). Most modern machine learning approaches to sequence labeling utilize word embeddings, learned representations of text, in which words with similar meanings have similar representations. Quite recently, contextualized word embeddings have garnered much attention because, unlike pretrained context-insensitive embeddings such as word2vec, they are able to capture word meaning in context. In this thesis, I evaluate the performance of different embedding setups (context-sensitive, context-insensitive word, as well as task-specific word, character, lemma, and PoS) on the three abovementioned sequence labeling tasks using a deep learning model (BiLSTM) and Portuguese datasets.
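The contextualized character-LM embeddings this thesis evaluates are implemented in the flair library; the sketch below stacks classic word embeddings with forward and backward character-LM embeddings, using English model identifiers purely for illustration rather than the thesis's Portuguese setup.

```python
# Stacking context-insensitive word embeddings with contextualized
# character-LM embeddings, as in the embedding setups the thesis compares.
# Model identifiers are English-language examples, not the thesis's choice.
from flair.data import Sentence
from flair.embeddings import WordEmbeddings, FlairEmbeddings, StackedEmbeddings

stacked = StackedEmbeddings([
    WordEmbeddings("glove"),            # static, context-insensitive
    FlairEmbeddings("news-forward"),    # forward character language model
    FlairEmbeddings("news-backward"),   # backward character language model
])

s = Sentence("The grass is green .")
stacked.embed(s)                        # each token gets one concatenated vector
for token in s:
    print(token.text, token.embedding.shape)
```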
8

Predictive models for career progression

Soliman, Zakaria 08 1900 (has links)
No description available.
9

Knowledge-Enabled Entity Extraction

Al-Olimat, Hussein S. January 2019 (has links)
No description available.
