41 |
A study of the use of natural language processing for conversational agents. Wilkens, Rodrigo Souza. January 2016 (has links)
Language is a mark of humanity and consciousness, and conversation (or dialogue) is one of the most fundamental forms of communication we learn as children. One way to make a computer more attractive for user interaction is therefore through natural language. Among the systems developed with some degree of language capability, the Eliza chatterbot is probably the first with a focus on dialogue. To make the interaction more interesting and useful to the user, there are approaches beyond chatterbots, such as conversational agents. These agents generally have, to some degree, properties such as: a body (with cognitive states, including beliefs, desires, and intentions or goals); interactive embodiment in the real or virtual world (including perception of events, communication, and the ability to manipulate the world and communicate with other agents); and human-like behavior (including affective abilities). Agents of this type go by several names, including animated agents and embodied conversational agents (ECAs). A dialogue system has six basic components. 
(1) The speech recognition component translates the user's speech into text. (2) The natural language understanding component produces a semantic representation suitable for dialogue, usually using grammars and ontologies. (3) The task manager chooses the concepts to be expressed to the user. (4) The natural language generation component defines how to express these concepts in words. (5) The dialogue manager controls the structure of the dialogue. (6) The speech synthesizer translates the agent's answer into speech. However, there is no consensus on the resources needed to develop conversational agents or on the difficulty involved (especially for resource-poor languages). This work focuses on the influence of the natural language components (understanding and dialogue management) and analyses in particular the use of parsing systems as part of developing conversational agents with more flexible language capabilities. It examines which parsing resources contribute to conversational agents and discusses how to develop them, targeting Portuguese, a resource-poor language. To this end, we analyze approaches to natural language understanding and identify parsing approaches that offer good performance; based on this analysis, we develop a prototype to evaluate the impact of using a parser in a conversational agent.
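The six-component pipeline enumerated above can be sketched as a chain of stand-in functions. This is a minimal illustration with toy rules and invented behavior for every component; it is not the architecture of any system described in the abstract.

```python
# Toy sketch of the six-component dialogue-system pipeline.
# All rules and templates below are illustrative assumptions.

def speech_recognition(audio: str) -> str:
    # (1) Stand-in: assume the "audio" is already transcribed text.
    return audio.lower()

def nlu(text: str) -> dict:
    # (2) Produce a crude semantic representation (intent + slots).
    if "weather" in text:
        return {"intent": "ask_weather", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def task_manager(semantics: dict) -> str:
    # (3) Choose the concept to express to the user.
    return "forecast" if semantics["intent"] == "ask_weather" else "clarify"

def nlg(concept: str) -> str:
    # (4) Realize the chosen concept as words.
    templates = {"forecast": "It will be sunny.", "clarify": "Could you rephrase?"}
    return templates[concept]

def dialogue_manager(user_turn: str) -> str:
    # (5) Control the flow: ASR -> NLU -> task manager -> NLG.
    return nlg(task_manager(nlu(speech_recognition(user_turn))))

def synthesizer(answer: str) -> str:
    # (6) Stand-in for text-to-speech.
    return f"<speech>{answer}</speech>"

print(synthesizer(dialogue_manager("What is the weather today?")))
# -> <speech>It will be sunny.</speech>
```

In a real system each stub would be replaced by a trained model; the sketch only shows how the six responsibilities chain together.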
|
43 |
Approche stochastique bayésienne de la composition sémantique pour les modules de compréhension automatique de la parole dans les systèmes de dialogue homme-machine / A Bayesian Approach of Semantic Composition for Spoken Language Understanding Modules in Spoken Dialog Systems. Meurs, Marie-Jean. 10 December 2009 (has links)
Spoken dialog systems enable users to interact with computer systems via natural dialogs, as they would with human beings. These systems are deployed in a wide range of application fields, from commercial services to tutorial or information services. However, the communication skills of such systems are bounded by their spoken language understanding abilities. Our work focuses on the spoken language understanding module, which links the automatic speech recognition module and the dialog manager. From the analysis of the user's utterance, the spoken language understanding module derives a representation of its semantic content, upon which the dialog manager can decide the next best action to perform. The system we propose introduces a stochastic approach based on Dynamic Bayesian Networks (DBNs) for spoken language understanding. DBN-based models allow us to infer and then compose semantic frame-based tree structures from speech transcriptions. First, we developed a semantic knowledge source covering the domain of our experimental corpus (MEDIA, a French corpus for tourism information and hotel booking). 
The semantic frames were designed according to the FrameNet paradigm, and a hand-crafted rule-based approach was used to derive the seed annotated training data. Then, to derive the frame meaning representations automatically, we propose a system based on a two-step decoding process using DBNs: first, basic concepts are derived from the user's utterance transcriptions; then, inferences are made on sequential semantic frame structures, considering all the available previous annotation levels. The inference process extracts all possible sub-trees according to lower-level information and composes the hypothesized branches into a single utterance-spanning tree. The composition step investigates two different algorithms: a heuristic minimizing the size and the weight of the tree, and a context-sensitive decision process based on support vector machines for detecting the relations between the hypothesized frames. This work investigates a stochastic process for generating and composing semantic frames using DBNs. The proposed approach offers a convenient way to automatically derive semantic annotations of speech utterances based on a complete hierarchical frame structure. Experimental results, obtained on the MEDIA dialog corpus, show that the system is able to supply the dialog manager with a rich and thorough representation of the semantics of the user's request.
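As an illustration of the first composition variant, the heuristic building a tree of minimum size and weight, the toy sketch below greedily attaches semantic fragments where they add the least cost. Frame names and costs are invented for illustration; the DBN inference that produces the fragments is not reproduced.

```python
# Toy greedy composition of semantic fragments into one tree,
# minimizing total attachment cost. Frames and costs are assumptions.

def compose(fragments, attach_cost):
    """Attach each fragment to the already-placed node with the cheapest edge."""
    root, *rest = fragments
    tree = {root: []}       # node -> list of attached children
    total = 0
    for frag in rest:
        host = min(tree, key=lambda node: attach_cost[(node, frag)])
        tree[host].append(frag)
        tree[frag] = []
        total += attach_cost[(host, frag)]
    return tree, total

# Hypothetical fragments from a hotel-booking utterance, with edge costs.
cost = {("Lodging", "Location"): 1, ("Lodging", "Price"): 1,
        ("Location", "Price"): 3}
tree, weight = compose(["Lodging", "Location", "Price"], cost)
print(tree, weight)
# -> {'Lodging': ['Location', 'Price'], 'Location': [], 'Price': []} 2
```

The thesis's second variant replaces the cost lookup with an SVM classifier deciding which composition operation to apply; the greedy loop structure stays the same.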
|
44 |
Construction et stratégie d’exploitation des réseaux de confusion en lien avec le contexte applicatif de la compréhension de la parole / Confusion networks : construction algorithms and Spoken Language Understanding decision strategies in real applications. Minescu, Bogdan. 11 December 2008 (has links)
The work presented in this PhD thesis deals with confusion networks as a compact and structured representation of multiple aligned recognition hypotheses produced by a speech recognition system and used by different applications. Confusion networks (CNs) are constructed from word graphs and structure information as a sequence of classes containing several competing word hypotheses. In this work we focus on the problem of robust understanding from spontaneous speech input in a dialogue application, using CNs as the structured representation of recognition hypotheses for the spoken language understanding module. We use the France Telecom spoken dialogue system for customer care. Two issues inherent to this context are tackled. A dialogue system does not only have to recognize what a user says but also to understand the meaning of the request and to act upon it. From the user's point of view, system performance is more accurately represented by the performance of the understanding process than by speech recognition performance alone. Our work aims at improving the performance of the understanding process. 
Using a real application implies being able to process real, heterogeneous data. An utterance can be more or less noisy, in the domain or out of the domain of the application, covered or not by the semantic model of the application, etc. A question raised by the variability of the data is whether applying the same processing to the entire data set, as done in classical approaches, is a suitable solution. This work follows a double perspective: to improve the CN construction algorithm with the intention of optimizing the understanding process, and to propose an adequate strategy for the use of CNs in a real application. Following a detailed analysis of two CN construction algorithms on a test set collected using the France Telecom customer care service, we decided to use the "pivot" algorithm for our work. We present a modified version of this algorithm, adapted to the application context, which notably introduces differentiated processing that favors the words carrying meaning. As for the variability of the real data the application has to process, we present a new multiple-level decision strategy aiming at applying different processing techniques to different utterance categories. We show that it is preferable to exploit the richness of multiple recognition hypotheses only on utterances that actually carry meaning. This strategy both optimizes computation time and yields better global system performance.
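The structure described above, a sequence of classes of competing word hypotheses, can be illustrated with a toy confusion network and a consensus decoding pass. The words and posterior scores are invented; real CNs are derived from recognizer word graphs.

```python
# Toy confusion network: each class maps competing word hypotheses
# to posterior scores. "*eps*" marks the option of skipping the slot.

confusion_network = [
    {"i": 0.8, "a": 0.2},
    {"want": 0.6, "won't": 0.3, "*eps*": 0.1},
    {"a": 0.7, "the": 0.3},
    {"taxi": 0.9, "tax": 0.1},
]

def consensus(cn):
    """Consensus hypothesis: highest-posterior word per class, epsilons dropped."""
    best = [max(cls, key=cls.get) for cls in cn]
    return [w for w in best if w != "*eps*"]

print(" ".join(consensus(confusion_network)))  # -> i want a taxi
```

An understanding module can go beyond this single best path by scoring alternative words per class, which is exactly the richness the thesis proposes to exploit only on meaning-carrying utterances.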
|
45 |
Leveraging Sequential Nature of Conversations for Intent Classification. Gotteti, Shree. January 2021 (has links)
No description available.
|
46 |
Undervisning, varför ska vi ha det? : En studie kring undervisning och språkundervisning i förskolan. Haquinius, Catarina. January 2021 (has links)
Teaching: why should we do it? This study deals with teaching in preschool. Is it really something that preschool should be doing, or do we risk a ‘schoolification’ of preschool? The survey also concerns language and language teaching in preschool. How do educators and principals view language? What does it mean to them, and how do they view their own language skills and their skills in teaching language to children? The survey investigates how preschools work with teaching and how educators work with language to support children in their language development. It also addresses how different preschools evaluate their activities and discusses how this affects the work with egalitarianism in preschool as framed in the governing document, the Curriculum for the Preschool 2018.
|
47 |
Granskning av examensarbetesrapporter med IBM Watson molntjänster. Eriksson, Patrik, Wester, Philip. January 2018 (has links)
Cloud services are one of the fastest-expanding fields of today. Companies such as Amazon, Google, Microsoft and IBM offer these cloud services in various forms. As this field progresses, the natural question arises: "What can you do with the technology today?" The technology offers scalability for hardware usage and user demands that is attractive to developers and companies. This thesis examines the applicability of cloud services by combining it with the question: "Is it possible to make an automated thesis examiner?" By narrowing the services down to the IBM Watson web services, the main question of this thesis reads: "Is it possible to make an automated thesis examiner using IBM Watson?" Hence, the goal of this thesis was to create an automated thesis examiner. The project used a modified version of Bunge's technological research method. Among the first steps, a definition of a software thesis examiner for student theses was created. Then an empirical study of the Watson services that seemed relevant from the literature study followed. These empirical studies allowed a deeper understanding of the services' practices and boundaries. From these findings and the definition of a software thesis examiner for student theses, an idea of how to build and implement an automated thesis examiner was formed. Most of IBM Watson's services were thoroughly evaluated, except for the Machine Learning service, which should have been studied further had the time resources not been depleted. This project found the Watson web services useful in many cases but did not find a service well suited for thesis examination. Although the goal was not reached, this thesis surveyed the Watson web services and can be used to improve understanding of their applicability, and to guide future implementations that address the provided definition.
|
48 |
Prototyputveckling för skalbar motor med förståelse för naturligt språk / Prototype development for a scalable engine with natural language understanding. Galdo, Carlos, Chavez, Teddy. January 2018 (has links)
Förståelse för naturligt språk, språk som har utvecklats av människan ex. talspråk eller teckenspråk, är en del av språkteknik. Det är ett brett ämnesområde där utvecklingen har gått fram i snabb takt senaste 20 åren. En bidragande faktor till denna utveckling är framgångarna med neurala nätverk som är en matematisk modell inspirerad av biologiska hjärnor. Förståelse för naturligt språk används inom många områden där det krävs att applikationer förstår innebörden av textinmatning. Exempel på applikationer som använder förståelse för naturligt språk är Google translate, Googles sökmotor och rättstavningsfunktionen i textredigerarprogram. A Great Thing AB har utvecklat applikationen Thing Launcher. Thing Launcher är en applikation som hanterar andra applikationer med hjälp av användarens olika kriterier i samband mobilens olika funktionaliteter som; väder, geografisk position, tid mm. Ett exempel kan vara att användaren vill att Spotify ska spela en specifik låt när användaren kommer hem, eller att en taxi ska vara på plats när användaren anländer till en geografisk position. I dagsläget styr man Thing Launcher med hjälp av textinmatningar. A Great Thing AB behöver hjälp att ta en prototyp på en motor med förståelse för naturligt språk som kan styras av både textinmatning och röstinmatning. Motorn ska användas i applikationen Thing Launcher. Med skalbarhet menas att motorn ska kunna utvecklas, att nya funktioner och applikationer ska kunna läggas till, samtidigt som systemet ska kunna vara i drift och att prestandan påverkas så lite som möjligt. Detta examensarbete har som syfte att undersöka vilka algoritmer som är lämpliga för att bygga en skalbar motor med förståelse av naturligt språk. Utifrån detta utveckla en prototyp. En litteraturstudie gjordes mellan dolda Markovmodeller och neurala nätverk. Resultatet visade att neurala nätverk var överlägset i förståelse av naturligt språk. 
Flera typer av neurala nätverk finns implementerade i TensorFlow och den är mycket flexibelt med sitt bredda utbud av kompatibla mobila enheter, vilket nyttar utvecklingen med det modulära aspekten och därför valdes detta som ramverk för att utveckla prototypen. De två viktigaste komponenterna i prototypen bestod av Command tagger, som ska kunna identifiera vilken applikation som användaren vill styra och NER tagger, som ska identifiera vad användaren vill att applikationen ska utföra. För att mäta träffsäkerheten utfördes det två tester, en för respektive tagger, flera gånger som mätte hur ofta komponenterna gissade rätt efter varje träningsrunda. Varje träningsrunda bestod av att komponenterna fick tiotusentals meningar som de fick gissa på följt av facit för att ge feedback. Med hjälp av feedback kunde komponenterna anpassas för hur de agerar i framtiden i samma situation. Command tagger gissade rätt 94 procent av gångerna och Ner tagger gissade rätt 96 procent av gångerna efter de sista träningsrundorna. I prototypen användes Androids inbyggda mjukvara för taligenkänning. Det är en funktion som omvandlar ljudvågor till text. En serverbaserad lösning med REST applikationsgränssnitt utvecklades för att göra motorn skalbar. Resultatet visar att fungerande prototyp som kan vidareutvecklas till en skalbar motor för naturligt språk. / Natural Language Understanding is a field that is part of Natural Language Processing. Big improvements have been made in the broad field of Natural Language Understanding during the past two decades. One big contribution to this is improvement is Neural Networks, a mathematical model inspired by biological brains. Natural Language Understanding is used in fields that require deeper understanding by applications. Google translate, Google search engine and grammar/spelling check are some examples of applications requiring deeper understanding. Thing Launcher is an application developed by A Great Thing AB. 
Thing Launcher is an application capable of managing other applications with different parameters. Some examples of parameters the user can use are geographic position and time. The user can, for example, control what song will be played when arriving home, or order an Uber when arriving at a certain destination. It is currently possible to control Thing Launcher by text input. A Great Thing AB needs help developing a prototype capable of understanding text input and speech. Scalable here means that it should be possible to develop and add functions and applications with as little impact as possible on the uptime and performance of the service. A comparison of suitable algorithms, tools, and frameworks was made in this thesis in order to research what it takes to develop a scalable engine for natural language understanding, and a prototype was then built from the gathered information. A theoretical comparison was made between Hidden Markov Models and Neural Networks. The results showed that Neural Networks are superior in the field of natural language understanding. The tests made in this thesis indicated that high accuracy could be achieved using neural networks. The TensorFlow framework was chosen because it has many different types of neural networks implemented in C/C++ ready to be used with Python, and for its wide compatibility with mobile devices. The prototype should be able to identify voice commands. The prototype has two important components: the Command tagger, which is going to identify which application the user wants to control, and the NER tagger, which is going to identify what the user wants to do. To calculate the accuracy, two types of tests, one for each component, were executed several times, measuring how often the components guessed right after each training iteration. Each training iteration consisted of giving the components thousands of sentences to guess on, followed by the right answers as feedback. With the help of this feedback, the components were molded to act correctly in situations like those seen in training. After the final training iterations, the Command tagger guessed right 94% of the time and the NER tagger 96% of the time. Android's built-in speech-recognition software was used in the prototype; it converts sound waves to text. A server-based solution with a REST interface was developed to make the engine scalable. This thesis resulted in a working prototype that can be further developed into a scalable engine.
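The per-iteration accuracy measurement described in the abstract can be sketched as follows. Everything here (the toy keyword-based tagger, the label names, the test sentences) is an illustrative stand-in, not the thesis's actual TensorFlow components: accuracy is simply the fraction of held-out sentences for which the tagger guesses the right label.

```python
def accuracy(tagger, test_set):
    """Fraction of sentences for which the tagger guesses the right label."""
    correct = sum(1 for sentence, label in test_set if tagger(sentence) == label)
    return correct / len(test_set)


def toy_command_tagger(sentence):
    """Trivial stand-in for the Command tagger: keyword -> application label."""
    keywords = {"play": "music_app", "order": "uber", "lights": "smart_home"}
    for word, app in keywords.items():
        if word in sentence.lower():
            return app
    return "unknown"


test_set = [
    ("Play my favourite song", "music_app"),
    ("Order a ride home", "uber"),
    ("Turn on the lights", "smart_home"),
    ("What is the weather", "weather_app"),  # not covered -> wrong guess
]

print(accuracy(toy_command_tagger, test_set))  # 3 of 4 correct -> 0.75
```

In the thesis this evaluation would be repeated after every training iteration, so the curve of accuracy over iterations (ending at 94% and 96% for the two taggers) can be plotted.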
|
49 |
Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain : Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach / BERT for question answering in the telecom domain : Adapting a BERT-based language model to the telecom domain using the ELECTRA pre-training method
Holm, Henrik January 2021 (has links)
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets. Thus, research is lacking for niche domains with domain-specific vocabulary. In this thesis, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. For measuring target-domain performance, the Question Answering (QA) downstream task within the telecom domain is used. Three domain adaptation approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance increases are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA both in the general domain and the telecom domain. At the same time, it allows for a tokenization vocabulary well suited to both domains. In conclusion, the suitability of a given domain adaptation approach is shown to depend on the available data and computational budget. Results highlight the clear benefits of domain adaptation, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset due to insufficient amounts of labeled target-domain data being available. / Bidirectional language models such as BERT have achieved great success in language technology in recent years.
Several further developments of BERT have been proposed, among them ELECTRA, whose novel discriminative training process shortens training time. The majority of research in the area is carried out on general-domain data. In other words, there is room for knowledge-building in domains with domain-specific language. This work explores methods for adapting a bidirectional language model to the telecom domain. To ensure high efficiency in the pre-training stage, the ELECTRA model is used. Performance achieved in the target domain is measured using a question-answering dataset for the telecom area. Three domain-adaptation methods are investigated: (1) continued pre-training on telecom text of a model pre-trained on the general domain; (2) pre-training from scratch on telecom text; and (3) pre-training from scratch on a combination of telecom and general-domain text. The experiments show that method 1 is both cost-effective and advantageous from a performance perspective: already after a short period of continued pre-training, clear improvements in target-domain question answering can be discerned, while generalizability is retained. Approach 2 exhibits the highest performance in the target domain, albeit with markedly worse ability to generalize. Method 3 combines the advantages of the previous two methods, with high performance both in the target domain and in the general domain. At the same time, it allows the use of a tokenizer vocabulary well suited to both domains. In summary, the suitability of a domain-adaptation method is determined by the situation and data at hand, as well as the available computational resources. The results demonstrate the clear gains that domain adaptation can yield, even when the question-answering task is learned by training on a general-domain dataset due to insufficient amounts of target-domain question-answering data.
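ELECTRA's discriminative objective, which the abstract credits for the improved sample efficiency, can be illustrated with a toy sketch. All names here are hypothetical: a "generator" corrupts some input tokens, and the discriminator's training target is a per-token label saying whether each token is original (0) or replaced (1). Unlike BERT's masked-LM objective, every token position yields a training signal.

```python
def corrupt(tokens, replacements, positions):
    """Replace the tokens at the given positions, returning the corrupted
    sequence and the per-token replaced/original labels that an ELECTRA-style
    discriminator would be trained to predict."""
    corrupted = list(tokens)
    labels = [0] * len(tokens)
    for pos, new_tok in zip(positions, replacements):
        if new_tok != tokens[pos]:  # only genuine replacements count as "replaced"
            corrupted[pos] = new_tok
            labels[pos] = 1
    return corrupted, labels


# A telecom-flavoured toy sentence; the generator swaps two tokens.
tokens = ["the", "base", "station", "sends", "a", "handover", "request"]
corrupted, labels = corrupt(tokens, ["cooked", "message"], [1, 5])
print(corrupted)  # ['the', 'cooked', 'station', 'sends', 'a', 'message', 'request']
print(labels)     # [0, 1, 0, 0, 0, 1, 0]
```

In the real model the replacements come from a small masked-LM generator and the discriminator is the BERT-like network being pre-trained; this sketch only shows the shape of the training signal.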
|
50 |
Contributions to connectionist language modeling and its application to sequence recognition and machine translation
Zamora Martínez, Francisco Julián 07 December 2012 (has links)
Natural language processing is an application area of artificial intelligence, in particular of pattern recognition, which studies, among other things, how to incorporate syntactic information (a language model) about how the words of a given language should be combined, so as to allow recognition/translation systems to decide which is the best "common-sense" hypothesis. It is a very broad area, and this work focuses solely on the part related to language modeling and its application to various tasks: sequence recognition using hidden Markov models, and statistical machine translation.
Specifically, this thesis centers on so-called connectionist language models, that is, language models based on neural networks. The good results of these models in various areas of natural language processing motivated this study.
Due to certain computational problems that connectionist language models suffer from, the systems that appear in the literature are built in two fully decoupled stages. In the first phase, a set of feasible hypotheses is found using a standard language model, assuming that this set is representative of the search space containing the best hypothesis. In the second, the connectionist language model is applied to that set and the highest-scoring hypothesis is extracted. This procedure is known as "rescoring".
This scenario motivates the main objectives of this thesis:
• To propose a technique that can drastically reduce this computational cost while degrading the quality of the solution found as little as possible.
• To study the effect of integrating connectionist language models into the search process of the proposed tasks.
• To propose modifications of the original model that improve its quality. / Zamora Martínez, FJ. (2012). Aportaciones al modelado conexionista de lenguaje y su aplicación al reconocimiento de secuencias y traducción automática [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/18066
|