Förenklad textinmatning på mobila enheter med hjälp av kontextbaserad språktolkning / Simplified text input for mobile devices using context based language interpretation

Jensen, Anders January 2005 (has links)
The number of text messages sent from mobile phones has increased dramatically over the last few years. At the same time, many new mobile portal services are being developed, and many of them rely on the ability to input text efficiently. The traditional phone keypad is ambiguous because each key encodes more than one letter. At present, the most common way to deal with this ambiguity is to use a stored dictionary to guess the intended input.

This thesis presents a new text entry strategy called Qtap. Instead of using a stored dictionary to guess the intended word, the method uses probabilities of letter sequences. Qtap's new features include the use of the Viterbi algorithm to decode input sequences and a non-alphabetic keypad. The thesis describes how the strategy and the keypad used by Qtap were developed.

Qtap is also compared to a dictionary-based method, T9, on a non-user level. The results show that Qtap performs well in several respects, which motivates its further development. A discussion of various modifications and additions to the design that may yield a performance improvement is also included.
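The core idea of the abstract — decoding an ambiguous key-press sequence with the Viterbi algorithm over letter-sequence probabilities — can be sketched as follows. This is an illustrative toy only: the keypad layout shown is the standard alphabetic one (Qtap uses a non-alphabetic keypad) and the bigram probabilities are invented, not a trained model.

```python
import math

# Toy keypad: each key maps to several candidate letters.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

# Toy bigram log-probabilities; unseen letter pairs fall back to a floor.
BIGRAM = {("c", "a"): -0.5, ("a", "t"): -0.4, ("b", "a"): -1.5, ("a", "v"): -2.5}
FLOOR = -5.0

def logp(prev, cur):
    return BIGRAM.get((prev, cur), FLOOR)

def viterbi_decode(keys):
    """Return the most probable letter sequence for a key-press sequence."""
    # Each Viterbi state is a candidate letter for the current key press;
    # paths maps a letter to (log-probability, best decoded prefix).
    paths = {c: (0.0, c) for c in KEYPAD[keys[0]]}
    for key in keys[1:]:
        new_paths = {}
        for cur in KEYPAD[key]:
            # Choose the best previous letter to transition from.
            score, prefix = max(
                (p + logp(prev, cur), pre) for prev, (p, pre) in paths.items()
            )
            new_paths[cur] = (score, prefix + cur)
        paths = new_paths
    return max(paths.values())[1]

print(viterbi_decode(["2", "2", "8"]))  # prints "cat" under these toy bigrams
```

Unlike a dictionary lookup, this decoder can produce any letter sequence, so out-of-vocabulary words remain reachable; the quality of the guess depends entirely on the letter-sequence statistics.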

Using Bidirectional Encoder Representations from Transformers for Conversational Machine Comprehension / Användning av BERT-språkmodell för konversationsförståelse

Gogoulou, Evangelina January 2019 (has links)
Bidirectional Encoder Representations from Transformers (BERT) is a recently proposed language representation model designed to pre-train deep bidirectional representations, with the goal of extracting context-sensitive features from an input text [1]. One of the challenging problems in the field of Natural Language Processing is Conversational Machine Comprehension (CMC): given a context passage, a conversational question, and the conversational history, the system should predict the answer span of the question in the context passage. The main challenge in this task is how to effectively encode the conversational history into the prediction of the next answer. In this thesis work, we investigate the use of the BERT language model for the CMC task. We propose a new architecture, named BERT-CMC, using the BERT model as a base. This architecture includes a new module for encoding the conversational history, inspired by the Transformer-XL model [2]. This module serves as memory throughout the conversation. The proposed model is trained and evaluated on the Conversational Question Answering dataset (CoQA) [3]. Our hypothesis is that the BERT-CMC model will effectively learn the underlying context of the conversation, leading to better performance than the baseline model proposed for CoQA. Our evaluation of BERT-CMC on the CoQA dataset shows that the model performs poorly (44.7% F1 score) compared to the CoQA baseline model (66.2% F1 score). For model explainability, we also perform a qualitative analysis of the model's behavior on questions involving various linguistic phenomena, e.g. coreference and pragmatic reasoning. Additionally, we motivate the critical design choices made by performing an ablation study of their effect on model performance. The results suggest that fine-tuning the BERT layers boosts model performance. Moreover, increasing the number of extra layers on top of BERT is shown to yield a larger capacity for the conversational memory.
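The CMC setup described above — predicting an answer span from a passage, the current question, and the conversational history — requires packing all three into one model input. A minimal sketch of one common approach, prepending recent history turns to the question, is shown below. This is an illustrative assumption, not the BERT-CMC input format, which instead encodes history through a Transformer-XL-style memory module.

```python
def build_input(passage, question, history, max_turns=2):
    """Concatenate the current question, recent history turns, and the
    passage into one BERT-style token sequence (whitespace tokenization
    for illustration; a real system would use WordPiece)."""
    tokens = ["[CLS]"] + question.split()
    # Keep only the most recent turns as conversational context,
    # since the input length budget is limited.
    for q, a in history[-max_turns:]:
        tokens += ["[SEP]"] + q.split() + ["[SEP]"] + a.split()
    tokens += ["[SEP]"] + passage.split() + ["[SEP]"]
    return tokens

history = [("Who proposed BERT?", "Devlin et al."),
           ("When?", "2018")]
seq = build_input("BERT was proposed by Devlin et al. in 2018 .",
                  "What does it pre-train ?", history)
print(seq[0], seq.count("[SEP]"))
```

A span-prediction head on top of such a sequence then scores each passage token as a possible answer start or end. The drawback of plain concatenation, which motivates a dedicated memory module, is that every appended turn consumes input length that could otherwise hold passage text.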
