  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using EEG to decode semantics during an artificial language learning task

Foster, Chris 04 December 2018 (has links)
The study of semantics in the brain explores how the brain represents, processes, and learns the meaning of language. In this thesis we show both that semantic representations can be decoded from electroencephalography data, and that we can detect the emergence of semantic representations as participants learn an artificial language mapping. We collected electroencephalography data while participants performed a reinforcement learning task that simulates learning an artificial language, and then developed a machine learning semantic representation model to predict semantics as a word-to-symbol mapping was learned. Our results show that 1) we can detect a reward positivity when participants correctly identify a symbol's meaning; 2) the reward positivity diminishes for subsequent correct trials; 3) we can detect neural correlates of the semantic mapping as it is formed; and 4) the localization of the neural representations is heavily distributed. Our work shows that language learning can be monitored using EEG, and that the semantics of even newly-learned word mappings can be detected using EEG. / Graduate
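As a rough, hypothetical illustration of the decoding idea in this abstract (not the thesis's actual pipeline or data), a ridge-regression decoder can map synthetic "EEG" features to semantic vectors, with predictions matched to the nearest word vector by cosine similarity:

```python
import numpy as np

def fit_ridge_decoder(X, Y, alpha=1.0):
    # Closed-form ridge regression: W = (X^T X + alpha I)^{-1} X^T Y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

def decode(X, W, candidates):
    # Predict a semantic vector per trial, then pick the nearest
    # candidate word vector by cosine similarity.
    preds = X @ W
    preds = preds / np.linalg.norm(preds, axis=1, keepdims=True)
    cands = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return (preds @ cands.T).argmax(axis=1)

# Toy demo with synthetic "EEG": each word's semantic vector is linearly
# mixed into 8 channels plus noise. All values here are invented.
rng = np.random.default_rng(0)
semantics = np.array([[1.0, 0.0], [0.0, 1.0]])   # two toy word meanings
labels = rng.integers(0, 2, size=200)
mixing = rng.normal(size=(2, 8))                 # hypothetical channel mixing
X = semantics[labels] @ mixing + 0.1 * rng.normal(size=(200, 8))
W = fit_ridge_decoder(X, semantics[labels])
accuracy = (decode(X, W, semantics) == labels).mean()
```

On this clean synthetic data the decoder recovers the word labels almost perfectly; real EEG decoding accuracies are far lower.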
2

Thoughts don't have Colour, do they? : Finding Semantic Categories of Nouns and Adjectives in Text Through Automatic Language Processing / Generering av semantiska kategorier av substantiv och adjektiv genom automatisk textbearbetning

Fallgren, Per January 2017 (has links)
Not all combinations of nouns and adjectives are possible, and some are clearly more frequent than others. With this in mind, this study aims to construct semantic representations of the two parts-of-speech based on how they occur with each other. By investigating these ideas via automatic natural language processing paradigms, the study aims to find evidence for a semantic mutuality between nouns and adjectives; this notion suggests that the semantics of a noun can be captured by its corresponding adjectives, and vice versa. Furthermore, a set of proposed categories of adjectives and nouns, based on the ideas of Gärdenfors (2014), is presented that is hypothesized to fall in line with the produced representations. Four evaluation methods were used to analyze the result, ranging from subjective discussion of nearest neighbours in vector space to accuracy computed from manual annotation. The result provided some evidence for the hypothesis, which suggests that further research is of value.
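The noun–adjective mutuality idea can be sketched in a few lines: represent each noun by counts of the adjectives it occurs with, then compare nouns by cosine similarity. The word pairs below are invented toy data, not the study's corpus:

```python
from math import sqrt

# Toy (noun, adjective) co-occurrence pairs, invented for illustration.
pairs = [("sky", "blue"), ("sea", "blue"), ("sky", "clear"), ("sea", "deep"),
         ("idea", "clear"), ("idea", "abstract"),
         ("thought", "abstract"), ("thought", "clear")]

def noun_vectors(pairs):
    # Each noun becomes a count vector over the adjective vocabulary.
    adjs = sorted({a for _, a in pairs})
    vecs = {}
    for noun, adj in pairs:
        vecs.setdefault(noun, [0] * len(adjs))[adjs.index(adj)] += 1
    return vecs

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))

vecs = noun_vectors(pairs)
# "idea" and "thought" share adjectives ("clear", "abstract"), so they
# should be closer to each other than "idea" is to "sky".
sim_abstract = cosine(vecs["idea"], vecs["thought"])
sim_concrete = cosine(vecs["idea"], vecs["sky"])
```

The same construction works in the other direction, representing adjectives by the nouns they modify.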
3

Word Vector Representations using Shallow Neural Networks

Adewumi, Oluwatosin January 2021 (has links)
This work highlights some important factors to consider when developing word vector representations and data-driven conversational systems. The neural network methods for creating word embeddings have gained more prominence than their older, count-based counterparts. However, there are still challenges, such as prolonged training time and the need for more data, especially with deep neural networks. Shallow neural networks, with their lesser depth, have the advantage of lower complexity; however, they face challenges of their own, such as sub-optimal combinations of hyper-parameters that produce sub-optimal models. This work, therefore, investigates the following research questions: "How strongly do hyper-parameters influence word embeddings' performance?" and "What factors are important for developing ethical and robust conversational systems?" In answering these questions, various experiments were conducted using different datasets in different studies. The first study investigates, empirically, various hyper-parameter combinations for creating word vectors and their impact on a few natural language processing (NLP) downstream tasks: named entity recognition (NER) and sentiment analysis (SA). The study shows that the optimal performance of embeddings for downstream NLP tasks depends on the task at hand. It also shows that certain combinations give strong performance across the tasks chosen for the study. Furthermore, it shows that reasonably smaller corpora are sufficient, or in some cases even produce better models, and take less time to train and load. This is important, especially now that environmental considerations play a prominent role in ethical research. Subsequent studies build on the findings of the first and explore hyper-parameter combinations for Swedish and English embeddings for the downstream NER task. The second study presents a new Swedish analogy test set for the evaluation of Swedish embeddings.
Furthermore, it shows that character n-grams are useful for Swedish, a morphologically rich language. The third study shows that broad coverage of topics in a corpus appears to be important for producing better embeddings, and that noise may be helpful in certain instances, though it is generally harmful. Hence, a relatively smaller corpus can show better performance than a larger one, as demonstrated in the work comparing the smaller Swedish Wikipedia corpus against the Swedish Gigaword corpus. In the final study (answering the second question), the argument is made, from the point of view of the philosophy of science, that the near-elimination of unwanted bias in training data, and the use of fora like peer review, conferences, and journals to provide the necessary avenues for criticism and feedback, are instrumental for the development of ethical and robust conversational systems.
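The hyper-parameter sweeps described above can be sketched as a grid search. The scoring function below is a stand-in — in a real sweep it would train embeddings with each combination and evaluate them on downstream NER/SA — so the grid values and the "best" combination here are purely illustrative:

```python
from itertools import product

# Hypothetical hyper-parameter grid, echoing the kind of sweep described.
grid = {
    "dimension": [25, 50],
    "window": [2, 5],
    "architecture": ["skipgram", "cbow"],
}

def evaluate(params):
    # Stand-in scorer: a real one would train a model with `params` and
    # report a downstream task score. This fixed toy function just makes
    # the example deterministic.
    score = 0.5
    score += 0.1 if params["architecture"] == "skipgram" else 0.0
    score += 0.01 * params["window"]
    score += 0.001 * params["dimension"]
    return score

# Enumerate every combination, score it, and rank best-first.
results = sorted(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=evaluate, reverse=True,
)
best = results[0]
```

The ranked `results` list is the point of the sketch: it is what lets one see which combinations are strong across tasks.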
4

SlimRank: um modelo de seleção de respostas para perguntas de consumidores / SlimRank: an answer selection model for consumer questions

Criscuolo, Marcelo 16 November 2017 (has links)
A disponibilidade de conteúdo gerado por usuários em sites colaborativos de perguntas e respostas tem impulsionado o avanço de modelos de Question Answering (QA) baseados em reúso. Essa abordagem pode ser implementada por meio da tarefa de seleção de respostas (Answer Selection, AS), que consiste em encontrar a melhor resposta para uma dada pergunta em um conjunto pré-selecionado de respostas candidatas. Nos últimos anos, abordagens baseadas em vetores distribucionais e em redes neurais profundas, em particular em redes neurais convolutivas (CNNs), têm apresentado bons resultados na tarefa de AS. Contudo, a maioria dos modelos é avaliada sobre córpus de perguntas objetivas e bem formadas, contendo poucas palavras. Raramente estruturas textuais complexas são consideradas. Perguntas de consumidores, comuns em sites colaborativos, podem ser bastante complexas. Em geral, são representadas por múltiplas frases inter-relacionadas, que apresentam pouca objetividade, vocabulário leigo e, frequentemente, contêm informações em excesso. Essas características aumentam a dificuldade da tarefa de AS. Neste trabalho, propomos um modelo de seleção de respostas para perguntas de consumidores. São contribuições deste trabalho: (i) uma definição para o objeto de pesquisa perguntas de consumidores; (ii) um novo dataset desse tipo de pergunta, chamado MilkQA; e (iii) um modelo de seleção de respostas, chamado SlimRank. O MilkQA foi criado a partir de um arquivo de perguntas e respostas coletadas pelo serviço de atendimento de uma renomada instituição pública de pesquisa agropecuária (Embrapa). Anotadores guiados pela definição de perguntas de consumidores proposta neste trabalho selecionaram 2,6 mil pares de perguntas e respostas contidas nesse arquivo. A análise dessas perguntas levou ao desenvolvimento do modelo SlimRank, que combina representação de textos na forma de grafos semânticos com arquiteturas de CNNs. 
O SlimRank foi avaliado no dataset MilkQA e comparado com baselines e dois modelos do estado da arte. Os resultados alcançados pelo SlimRank foram bastante superiores aos resultados dos baselines, e compatíveis com resultados de modelos do estado da arte; porém, com uma significativa redução do tempo computacional. Acreditamos que a representação de textos na forma de grafos semânticos combinada com CNNs seja uma abordagem promissora para o tratamento dos desafios impostos pelas características singulares das perguntas de consumidores. / The increasing availability of user-generated content in community Q&A sites has led to the advancement of Question Answering (QA) models that rely on reuse. Such an approach can be implemented through the task of Answer Selection (AS), which consists in finding the best answer for a given question in a pre-selected pool of candidate answers. Recently, good results have been achieved by AS models based on distributed word vectors and deep neural networks that are used to rank answers for a given question. Convolutional Neural Networks (CNNs) are particularly successful in this task. Most AS models are built over datasets that contain only short, objective questions expressed as interrogative sentences with few words. Complex text structures are rarely considered. However, consumer questions may be quite complex. This kind of question is the main form of seeking information in community Q&A sites, forums and customer services. Consumer questions have characteristics that increase the difficulty of the answer selection task. In general, they are composed of multiple interrelated sentences that are usually subjective and contain layman's terms and an excess of details that may not be particularly relevant. In this work, we propose an answer selection model for consumer questions.
Specifically, the contributions of this work are: (i) a definition for the consumer questions research object; (ii) a new dataset of this kind of question, which we call MilkQA; and (iii) an answer selection model, named SlimRank. MilkQA was created from an archive of questions and answers collected by the customer service of a well-known public agricultural research institution (Embrapa). It contains 2.6 thousand question-answer pairs selected and anonymized by human annotators guided by the definition proposed in this work. The analysis of questions in MilkQA led to the development of SlimRank, which combines semantic textual graphs with CNN architectures. SlimRank was evaluated on MilkQA and compared to baselines and two state-of-the-art answer selection models. The results achieved by our model were much higher than the baselines and comparable to the state of the art, but with a significant reduction of computational time. Our results suggest that combining semantic text graphs with convolutional neural networks is a promising approach for dealing with the challenges imposed by consumer questions' unique characteristics.
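A minimal baseline version of answer selection, of the kind models like SlimRank are compared against, ranks candidate answers by bag-of-words cosine similarity to the question. The sentences below are invented toy examples, not MilkQA data:

```python
from collections import Counter
from math import sqrt

def bow(text):
    # Bag-of-words term-frequency vector as a Counter.
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_answer(question, candidates):
    # Answer selection: return the candidate most similar to the question.
    q = bow(question)
    return max(candidates, key=lambda c: cosine(q, bow(c)))

question = "what vaccine schedule is recommended for dairy calves"
candidates = [
    "the recommended vaccine schedule for dairy calves starts at two months",
    "milk production depends on feed quality and climate",
]
best = select_answer(question, candidates)
```

Such lexical baselines break down exactly where consumer questions are hard — long, subjective, multi-sentence questions with little word overlap — which is the gap neural models aim to close.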
6

Automatisk dataextrahering och kategorisering av kvitton / Automatic data extraction and categorisation of receipts

Larsson, Christoffer, Wångenberg Olsson, Adam January 2019 (has links)
Anställda på företag gör ibland utlägg på köp åt företaget som de behöver dokumentera manuellt. För att underlätta dokumentation av utlägg hos anställda på företaget Consid AB har detta arbete haft i syfte att utveckla en tjänst som från en bild på ett kvitto kan extrahera relevant data såsom pris, datum, företagsnamn samt kategorisera kvittot. Resultatet som arbetet har medfört är en tjänst som kan extrahera text från kvitton med en säkerhet på i snitt 73 % på att texten är rätt. Efter tester kan det även fastställas att tjänsten kan hitta pris, datum och företagsnamn från ca. 64 % av testade kvitton med olika kvalité och innehåll. Tjänsten som byggdes har även implementerat två olika kategoriseringsmetoder där hälften av de testade kvittona kan kategoriseras av de båda metoderna. Efter analyser av metoder och resultat har slutsatser kunnat dras: tjänsten innehåller ett flertal brister och mer tid bör läggas på att optimera och testa tjänsten ytterligare. / Employees at companies sometimes make purchases on behalf of the company which they manually need to document. To ease the documentation of purchases made by employees at Consid AB, this study has had the goal of developing a service that, from an image of a receipt, can extract relevant data such as price, date and company name, along with a category for the purchase. The resulting service can extract text from receipts with an average confidence of 73 % that the text is correct. Tests of the service show that it can find price, date and company name on around 64 % of test receipts of varying quality and content. The resulting service also implements two different categorisation methods, where half of the test receipts could be categorised by both methods. After analysing the methods and results, the conclusions are that the service contains numerous flaws and that more time needs to be put into optimising and testing it further.
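A sketch of the extraction step, assuming OCR output is already available as plain text. The regular expressions, field rules, and sample receipt below are invented for illustration, not the rules used in the study:

```python
import re

def extract_fields(ocr_text):
    """Pull price, date and company name out of OCR'd receipt text.
    The patterns are illustrative guesses: total after TOTAL/SUMMA,
    an ISO-style date, and the company name on the first line."""
    price = re.search(r"(?:TOTAL|SUMMA)[^\d]*(\d+[.,]\d{2})", ocr_text, re.I)
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", ocr_text)
    company = ocr_text.strip().splitlines()[0].strip()
    return {
        "company": company,
        "date": date.group(1) if date else None,
        "price": float(price.group(1).replace(",", ".")) if price else None,
    }

# Invented OCR output for a Swedish-style receipt.
receipt = """ICA Supermarket
Kvitto 2019-03-14
Mjolk 12,50
SUMMA 86,50
"""
fields = extract_fields(receipt)
```

Real OCR output is far noisier than this, which is one reason the reported hit rate for all three fields was around 64 %.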
7

Evaluating Statistical Machine Learning and Deep Learning Algorithms for Anomaly Detection in Chat Messages / Utvärdering av statistiska maskininlärnings- och djupinlärningsalgoritmer för anomalitetsdetektering i chattmeddelanden

Freberg, Daniel January 2018 (has links)
Automatically detecting anomalies in text is of great interest for surveillance entities, as vast amounts of data can be analysed to find suspicious activity. In this thesis, three distinct machine learning algorithms are evaluated as a chat message classifier is implemented for the purpose of market surveillance. Naive Bayes and Support Vector Machine belong to the statistical class of machine learning algorithms evaluated in this thesis, and both require feature selection; a side objective of the thesis is thus to find a suitable feature selection technique to ensure the mentioned algorithms achieve high performance. The Long Short-Term Memory network is the deep learning algorithm evaluated in the thesis; rather than depending on feature selection, the deep neural network is trained using word embeddings. Each of the algorithms achieved high performance, but the findings of the thesis suggest that the Naive Bayes algorithm in conjunction with a term-frequency feature selection technique is the most suitable choice for this particular learning problem. / Att automatiskt kunna upptäcka anomalier i text har stora implikationer för företag och myndigheter som övervakar olika sorters kommunikation. I detta examensarbete utvärderas tre olika maskininlärningsalgoritmer för chattmeddelandeklassifikation i ett marknadsövervakningsystem. Naive Bayes och Support Vector Machine tillhör båda den statistiska klassen av maskininlärningsalgoritmer som utvärderas i studien och båda kräver selektion av vilka särdrag i texten som ska användas i algoritmen. Ett sekundärt mål med studien är således att hitta en passande selektionsteknik för att de statistiska algoritmerna ska prestera så bra som möjligt. Long Short-Term Memory Network är djupinlärningsalgoritmen som utvärderas i studien. Istället för att använda en selektionsteknik kommer djupinlärningsalgoritmen nyttja ordvektorer för att representera text.
Resultaten visar att alla utvärderade algoritmer kan nå hög prestanda för ändamålet, i synnerhet Naive Bayes tillsammans med termfrekvensselektion.
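A minimal multinomial Naive Bayes classifier with term-frequency feature selection, the combination the thesis found most suitable, might look as follows. The chat messages and labels are invented toy data:

```python
from collections import Counter, defaultdict
from math import log

def top_terms(docs, k):
    """Term-frequency feature selection: keep the k most frequent terms."""
    counts = Counter(w for text, _ in docs for w in text.split())
    return {w for w, _ in counts.most_common(k)}

def train_nb(docs, vocab):
    # Multinomial Naive Bayes with add-one (Laplace) smoothing.
    class_counts = Counter(label for _, label in docs)
    term_counts = defaultdict(Counter)
    for text, label in docs:
        for w in text.split():
            if w in vocab:
                term_counts[label][w] += 1
    priors = {c: log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(term_counts[c].values()) + len(vocab)
        likelihoods[c] = {w: log((term_counts[c][w] + 1) / total) for w in vocab}
    return priors, likelihoods

def classify(text, priors, likelihoods):
    # Sum log-probabilities; unknown terms contribute nothing.
    def score(c):
        return priors[c] + sum(likelihoods[c].get(w, 0.0) for w in text.split())
    return max(priors, key=score)

# Invented toy training messages for a market-surveillance flavour.
docs = [
    ("buy now before the price moves", "suspicious"),
    ("sell everything before the announcement", "suspicious"),
    ("lunch at noon works for me", "normal"),
    ("see you at the meeting tomorrow", "normal"),
]
priors, likelihoods = train_nb(docs, top_terms(docs, 20))
label = classify("sell before the price announcement", priors, likelihoods)
```

In practice the vocabulary cutoff `k` is the feature-selection knob: too small and informative terms are lost, too large and rare noise terms dilute the model.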
8

Word2vec2syn : Synonymidentifiering med Word2vec / Word2vec2syn : Synonym Identification using Word2vec

Pettersson, Tove January 2019 (has links)
Inom NLP (eng. natural language processing) är synonymidentifiering en av de språkvetenskapliga utmaningarna som många antar. Fodina Language Technology AB är ett företag som skapat ett verktyg, Termograph, ämnad att samla termer inom företag och hålla den interna språkanvändningen konsekvent. En metodkombination bestående av språkteknologiska strategier utgör synonymidentifieringen och Fodina önskar ett större täckningsområde samt mer dynamik i framtagningsprocessen. Därav syftade detta arbete till att ta fram en ny metod, utöver metodkombinationen, för just synonymidentifiering. En färdigtränad Word2vec-modell användes och den inbyggda funktionen för cosinuslikheten användes för att få fram synonymer och skapa kluster. Modellen validerades, testades och utvärderades i förhållande till metodkombinationen. Valideringen visade att modellen skattade inom ett rimligt mänskligt spann i genomsnitt 60,30 % av gångerna och Spearmans korrelation visade på en signifikant stark korrelation. Testningen visade att 32 % av de bearbetade klustren innehöll matchande synonymförslag. Utvärderingen visade att i de fall som förslagen inte matchade så var modellens synonymförslag korrekta i 5,73 % av fallen jämfört med 3,07 % för metodkombinationen. Den interna reliabiliteten för utvärderarna visade på en befintlig men svag enighet, Fleiss Kappa = 0,19, CI(0,06, 0,33). Trots viss osäkerhet i resultaten påvisas ändå möjligheter för vidare användning av word2vec-modeller inom Fodinas synonymidentifiering. / One of the main challenges in the field of natural language processing (NLP) is synonym identification. Fodina Language Technology AB is the company behind the tool, Termograph, that aims to collect terms and provide a consistent language within companies. A combination of multiple methods from the field of language technology constitutes the synonym identification and Fodina would like to improve the area of coverage and increase the dynamics of the working process. 
The focus of this thesis was therefore to evaluate a new method for synonym identification beyond the combination already in use. A pre-trained Word2vec model was used, and for the synonym identification the built-in function for cosine similarity was applied in order to create clusters. The model was validated, tested and evaluated relative to the combination. The validation indicated that the model made estimations within a fair human-based range on average 60.30% of the time, and Spearman's correlation indicated a strong significant correlation. The testing showed that 32% of the processed synonym clusters contained matching synonym suggestions. The evaluation showed that, in the cases where the clusters did not match, the synonym suggestions from the model were correct in 5.73% of all cases, compared to 3.07% for the combination. The inter-rater reliability indicated a slight agreement, Fleiss' Kappa = 0.19, CI(0.06, 0.33). Despite some uncertainty in the results, opportunities for further use of Word2vec models within Fodina's synonym identification are nevertheless demonstrated.
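The core of the cosine-similarity clustering step can be sketched as follows. The vectors are invented toy values standing in for a pre-trained Word2vec model, and the threshold is an arbitrary illustrative choice:

```python
from math import sqrt

# Toy "pre-trained" vectors; a real run would load a Word2vec model.
vectors = {
    "bil":    [0.9, 0.1, 0.0],   # car
    "fordon": [0.8, 0.2, 0.1],   # vehicle
    "hund":   [0.0, 0.9, 0.3],   # dog
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def synonym_cluster(word, vectors, threshold=0.9):
    """All words whose cosine similarity to `word` meets the threshold."""
    target = vectors[word]
    return {w for w, v in vectors.items()
            if w != word and cosine(target, v) >= threshold}

cluster = synonym_cluster("bil", vectors)
```

Choosing the threshold trades precision against coverage — the same trade-off the thesis's evaluation measures against the existing method combination.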
9

Word Representations and Machine Learning Models for Implicit Sense Classification in Shallow Discourse Parsing

Callin, Jimmy January 2017 (has links)
CoNLL 2015 featured a shared task on shallow discourse parsing. In 2016, the efforts continued with an increasing focus on sense classification. In the case of implicit sense classification, there was an interesting mix of traditional and modern machine learning classifiers using word representation models. In this thesis, we explore the performance of a number of these models, and investigate how they perform using a variety of word representation models. We show that there are large performance differences between word representation models for certain machine learning classifiers, while others are more robust to the choice of word representation model. We also show that with the right choice of word representation model, simple and traditional machine learning classifiers can reach competitive scores even when compared with modern neural network approaches.
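A toy sketch of using word representations for sense classification: average the word vectors of an argument's tokens into a fixed-size feature and classify with a nearest-centroid rule. The vectors and sense labels below are invented; real systems use pre-trained embeddings and stronger classifiers:

```python
from math import sqrt

# Hypothetical 2-d word vectors (real embeddings have hundreds of dimensions).
toy_vectors = {
    "rain": [1.0, 0.0], "wet": [0.9, 0.1], "flood": [0.8, 0.2],
    "but": [0.0, 1.0], "although": [0.1, 0.9], "however": [0.2, 0.8],
}

def embed(tokens):
    """Average the vectors of known tokens into one fixed-size feature."""
    known = [toy_vectors[t] for t in tokens if t in toy_vectors]
    return [sum(d) / len(known) for d in zip(*known)]

def nearest_sense(vec, centroids):
    # Assign the sense whose centroid is closest in Euclidean distance.
    def dist(c):
        return sqrt(sum((a - b) ** 2 for a, b in zip(vec, centroids[c])))
    return min(centroids, key=dist)

# Sense centroids built from invented cue words per PDTB-style sense.
centroids = {
    "Contingency.Cause": embed(["rain", "wet", "flood"]),
    "Comparison.Contrast": embed(["but", "although", "however"]),
}
sense = nearest_sense(embed(["rain", "flood"]), centroids)
```

Swapping in a different `toy_vectors` table while keeping the classifier fixed is exactly the kind of comparison the thesis runs across word representation models.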
