About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

1

Computer support for learners of spoken English

Hincks, Rebecca January 2005 (has links)
This thesis concerns the use of speech technology to support the process of learning the English language. It applies theories of computer-assisted language learning and second language acquisition to address the needs of beginning, intermediate and advanced students of English for specific purposes.

The thesis includes an evaluation of speech-recognition-based pronunciation software, based on a controlled study of a group of immigrant engineers. The study finds that while the weaker students may have benefited from their software practice, the pronunciation ability of the better students did not improve.

The linguistic needs of advanced and intermediate Swedish-native students of English are addressed in a study using multimodal speech synthesis in an interactive exercise demonstrating differences in the placement of lexical stress in two Swedish-English cognates. A speech database consisting of 28 ten-minute oral presentations made by these learners is described, and an analysis of pronunciation errors is presented. Eighteen of the presentations are further analyzed with regard to the normalized standard deviation of fundamental frequency over 10-second samples of speech, termed the pitch variation quotient (PVQ). The PVQ is found to range from 6% to 34% in individual samples, with mean levels per presentation ranging from 11% to 24%. Males are found to use more pitch variation than females, and the more proficient female speakers of English use more pitch variation than the less proficient ones. A perceptual experiment tests the relationship between PVQ and impressions of speaker liveliness; an overall correlation of 0.83 is found. Temporal variables in the presentation speech are also studied.

A bilingual database in which five speakers give the same presentation in both English and Swedish is used to examine the effects of using a second language on presentation prosody. Little intra-speaker difference in pitch variation is found, but the speakers speak on average 20% faster in their native language. The thesis concludes with a discussion of how the results could be applied in a proposed feedback mechanism for practicing and assessing oral presentations, conceptualized as a ‘speech checker.’ Potential users of the system would include native as well as non-native speakers of English.
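As a rough illustration of the pitch variation quotient described above, the sketch below computes the normalized standard deviation of F0 (interpreted here as standard deviation divided by mean, expressed as a percentage) over consecutive 10-second windows of a pitch track. The frame rate, the treatment of unvoiced frames, and the synthetic example are assumptions for illustration only, not details taken from the thesis.

```python
import numpy as np

def pitch_variation_quotient(f0_hz, frame_rate=100, window_s=10):
    """PVQ per 10-second window: 100 * std(F0) / mean(F0) over voiced frames.
    f0_hz is a per-frame pitch track in Hz, with 0 or NaN marking unvoiced frames."""
    f0 = np.asarray(f0_hz, dtype=float)
    window = frame_rate * window_s
    pvq = []
    for start in range(0, len(f0) - window + 1, window):
        chunk = f0[start:start + window]
        voiced = chunk[np.nan_to_num(chunk) > 0]      # keep voiced frames only
        if voiced.size:
            pvq.append(100.0 * voiced.std() / voiced.mean())
    return pvq

# Synthetic 30-second pitch track at 100 frames/s, mean ~180 Hz, std ~25 Hz.
rng = np.random.default_rng(0)
f0 = 180 + 25 * rng.standard_normal(3000)
print(pitch_variation_quotient(f0))   # roughly 14% per window for this synthetic track
```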
2

Expressiveness in virtual talking faces

Svanfeldt, Gunilla January 2006 (has links)
In this thesis, different aspects of how to make synthetic talking faces more expressive have been studied. How can we collect data for the studies? How is lip articulation affected by expressive speech? Can the recorded data be used interchangeably in different face models? Can we use eye movements in the agent for communicative purposes? The work of this thesis includes studies of these questions and also an experiment using a talking head as a complement to a targeted audio device, in order to increase the intelligibility of the speech.

The data collection described in the first paper resulted in two multimodal speech corpora. The subsequent analysis of the recorded data showed that expressive modes strongly affect speech articulation, although further studies are needed in order to acquire more quantitative results, to cover more phonemes and expressions, and to be able to generalise the results to more than one individual.

When switching the files containing facial animation parameters (FAPs) between different face models (as well as research sites), some problematic issues were encountered despite the fact that both face models were created according to the MPEG-4 standard. The evaluation test of the implemented emotional expressions showed that the best recognition results were obtained when the face model and FAP file originated from the same site.

The perception experiment in which a synthetic talking head was combined with a targeted-audio parametric loudspeaker showed that the virtual face augmented the intelligibility of speech, especially when the sound beam was directed slightly to the side of the listener, i.e. at lower sound intensities.

In the experiment with eye gaze in a virtual talking head, the possibility of achieving mutual gaze with the observer was assessed. The results indicated that this is possible, but also pointed at some design features in the face model that need to be altered in order to achieve better control of the perceived gaze direction.
3

Clustering in Swedish : The Impact of some Properties of the Swedish Language on Document Clustering and an Evaluation Method

Rosell, Magnus January 2005 (has links)
Text clustering divides a set of texts into groups, so that texts within each group are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on known ones. The contributions of this thesis are an investigation of text representation for Swedish and an evaluation method that uses two or more manual categorizations.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Since Swedish has more morphological variation than, for instance, English, we have used a stemmer to strip suffixes. This gives moderate improvements and reduces the number of words in the representation.

Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds; in fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. In the ordinary vector space model the constituents of compounds are not accounted for when calculating the similarity between texts. To use them, we have employed a spell-checking program to split compounds. The results clearly show that this is beneficial.

The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. None of our experiments have shown any improvement over the ordinary model.

Evaluation of text clustering results is very hard: what constitutes a good partition of a text set is inherently subjective. Automatic evaluation methods are either internal or external. Internal quality measures use the representation in some manner and are therefore not suitable for comparisons of different representations. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is -- text sets differ in how difficult they are to cluster, and categorizations are more or less adapted to a particular text set. We describe an evaluation method for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition. We also describe the kappa coefficient as a clustering quality measure in the same setting.
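To make the representation concrete, here is a minimal sketch of the ordinary vector space model with a compound-splitting step: texts are bags of words, compounds optionally contribute their constituents as extra terms, and similarity is the cosine between the resulting vectors. The tiny split lexicon stands in for the spell-checker-based splitter used in the thesis; term weighting and stemming are omitted.

```python
import math
from collections import Counter

def tokens(text, compound_splits=None):
    """Lowercased word tokens; solid compounds found in the split lexicon also
    contribute their constituents as extra terms."""
    compound_splits = compound_splits or {}
    out = []
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        out.append(word)
        out.extend(compound_splits.get(word, []))   # add compound constituents
    return out

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

splits = {"textklustring": ["text", "klustring"]}   # hypothetical split lexicon
d1 = Counter(tokens("Textklustring delar upp texter i grupper.", splits))
d2 = Counter(tokens("Klustring av text ger grupper av texter.", splits))
print(cosine(d1, d2))   # 'text' and 'klustring' now match across the two documents
```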
4

Fördomsfulla associationer i en svensk vektorbaserad semantisk modell / Bias in a Swedish Word Embedding

Jonasson, Michael January 2019 (has links)
Word embeddings are a powerful technique in which word meaning is represented by vectors of real numbers. The vectors allow geometric operations that capture semantically important relationships between the words they represent. In this study, the WEAT method is implemented and applied in order to examine whether statistical relationships between words that can be perceived as biased exist in a Swedish word embedding trained on a corpus from a Swedish newspaper. The results show that the word embedding can reproduce several of the previously IAT-documented biases that were tested. A second method, WEFAT, is also implemented and applied to the word embedding in order to explore its ability to represent two actual statistical relationships in the world, which it does successfully in both cases. The results of the study as a whole lend support to the validity of both methods, while also illuminating problematic word relationships that complicate the use of word embeddings in language technology applications.
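The WEAT statistic mentioned above can be sketched as follows: each target word's association is its mean cosine similarity to one attribute set minus its mean similarity to the other, and the effect size is the difference in mean association between the two target sets, normalized by the pooled standard deviation. The toy vectors and Swedish word lists below are invented for illustration; a real test would load the newspaper embedding and the IAT-derived word lists, and would also include a permutation test.

```python
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, A, B, emb):
    """s(w, A, B): mean similarity to attribute set A minus mean similarity to B."""
    return (np.mean([cos(emb[word], emb[a]) for a in A])
            - np.mean([cos(emb[word], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Difference in mean association between target sets X and Y, normalized
    by the standard deviation of associations over all target words."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 50-dimensional "embedding"; real experiments would use vectors trained on the news corpus.
rng = np.random.default_rng(1)
vocab = ["blomma", "ros", "insekt", "spindel", "behaglig", "härlig", "obehaglig", "otäck"]
emb = {w: rng.standard_normal(50) for w in vocab}
X, Y = ["blomma", "ros"], ["insekt", "spindel"]        # target sets: flowers vs. insects
A, B = ["behaglig", "härlig"], ["obehaglig", "otäck"]  # attribute sets: pleasant vs. unpleasant
print(weat_effect_size(X, Y, A, B, emb))
```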
5

Sentiment Analysis of Equity Analyst Research Reports using Convolutional Neural Networks

Löfving, Olof January 2019 (has links)
Natural language processing, a subfield of artificial intelligence and computer science, has recently attracted great research interest due to the vast amount of information created on the internet in the modern era. One of the main natural language processing areas is sentiment analysis, a field that studies the polarity of human natural language and generally tries to categorize it as positive, negative or neutral. In this thesis, sentiment analysis has been applied to research reports written by equity analysts. The objective has been to investigate whether there exists a distinct distribution of the reports and whether sentiment in these reports can be classified. The thesis consists of two parts: first, investigating how to divide the reports into different sentiment labelling regimes; second, categorizing the sentiment using machine learning techniques. Logistic regression as well as several convolutional neural network architectures have been used to classify the sentiment. Working with textual data requires mapping text to real-valued features. Several feature extraction methods have been investigated, including bag of words, term frequency-inverse document frequency and Word2vec. Of the tested labelling regimes, classifying the documents using upgrades and downgrades of the report recommendation shows the most promising potential. For this regime, the convolutional neural network architectures outperform logistic regression by a significant margin. Of the networks tested, a double input channel utilizing two different Word2vec representations performs best. The two representations originate from different sources: one from the set of equity research reports and the other trained by the Google Brain team on an extensive Google News data set. This suggests that using one representation for topic-specific words and one that better represents more common words enhances classification performance.
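A rough sketch of the double input channel idea: two frozen embedding layers, one initialized from vectors trained on the report corpus and one from the pre-trained Google News vectors, each followed by a 1-D convolution and global max pooling before concatenation and a softmax classifier. The filter count, kernel size, three-class output, and the assumption that both embeddings share one vocabulary index are illustrative choices, not the configuration used in the thesis.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def two_channel_cnn(vocab_size, maxlen, emb_reports, emb_news, n_classes=3):
    """CNN text classifier with two parallel, frozen word2vec channels that are
    concatenated after convolution and pooling."""
    inp = layers.Input(shape=(maxlen,), dtype="int32")
    channels = []
    for matrix in (emb_reports, emb_news):
        x = layers.Embedding(vocab_size, matrix.shape[1],
                             embeddings_initializer=tf.keras.initializers.Constant(matrix),
                             trainable=False)(inp)
        x = layers.Conv1D(filters=100, kernel_size=5, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)
        channels.append(x)
    out = layers.Dense(n_classes, activation="softmax")(layers.Concatenate()(channels))
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy shapes: 5000-word vocabulary, 300-dimensional vectors, 200-token reports.
emb_a = np.random.rand(5000, 300).astype("float32")
emb_b = np.random.rand(5000, 300).astype("float32")
two_channel_cnn(5000, 200, emb_a, emb_b).summary()
```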
6

Automatic Error Detection and Correction in Neural Machine Translation : A comparative study of Swedish to English and Greek to English

Papadopoulou, Anthi January 2019 (has links)
Automatic detection and automatic correction of machine translation output are important steps in ensuring the optimal quality of the final output. In this work, we compared the output of neural machine translation for two different language pairs, Swedish to English and Greek to English. The comparison was made using common machine translation metrics (BLEU, METEOR, TER) and syntax-related ones (POSBLEU, WPF, WER over POS classes). It was found that neither the common metrics nor the purely syntax-related ones were able to capture the quality of the machine translation output accurately, but the decomposition of WER over POS classes was the most informative one. A sample of each language was taken to aid the comparison between manual and automatic error categorization into five error categories, namely reordering errors, inflectional errors, missing words, extra words, and incorrect lexical choices. Both Spearman's ρ and Pearson's r showed a good correlation with human judgment, with values above 0.9. Finally, based on the results of this error categorization, automatic post-editing rules were implemented and applied, and their performance was checked against the sample and the rest of the data set, showing varying results. The impact on the sample was greater, showing improvement in all metrics, while the impact on the rest of the data set was negative. An investigation of this, alongside the fact that correction was not possible for Greek due to extremely free reference translations and a lack of error patterns in the spoken language, reinforced the belief that automatic post-editing is tightly connected to consistency in the reference translation, while also suggesting that more than one reference translation would likely be needed to ensure better results when handling machine translation output.
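For reference, word error rate (WER) is the minimum number of substitutions, insertions and deletions needed to turn the hypothesis into the reference, divided by the reference length. The sketch below computes it with dynamic programming and adds a naive per-POS breakdown that charges substitutions and deletions to the POS tag of the reference word involved; this is a simplification of the WER-over-POS-classes decomposition used in the thesis, and the handling of insertions in particular is glossed over.

```python
def wer_with_pos(ref, hyp, ref_pos):
    """Levenshtein alignment between reference and hypothesis tokens.
    Returns (overall WER, per-POS error counts), charging substitutions and
    deletions to the POS tag of the reference word; insertions only affect
    the overall rate (a simplification)."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    errors_by_pos = {}
    i, j = n, m
    while i > 0 or j > 0:                          # backtrace one optimal alignment
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] and ref[i - 1] == hyp[j - 1]:
            i, j = i - 1, j - 1                    # match
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            errors_by_pos[ref_pos[i - 1]] = errors_by_pos.get(ref_pos[i - 1], 0) + 1
            i, j = i - 1, j - 1                    # substitution
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            errors_by_pos[ref_pos[i - 1]] = errors_by_pos.get(ref_pos[i - 1], 0) + 1
            i -= 1                                 # deletion
        else:
            j -= 1                                 # insertion
    return d[n][m] / n, errors_by_pos

ref = "the cat sat on the mat".split()
hyp = "the cats sat on mat".split()
pos = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]
print(wer_with_pos(ref, hyp, pos))   # (0.333..., {'DET': 1, 'NOUN': 1})
```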
7

Transcription of Historical Encrypted Manuscripts : Evaluation of an automatic interactive transcription tool.

Johansson, Kajsa January 2019 (has links)
Countless historical sources are preserved in national libraries and archives all over the world and contain important information about our history. Some of these sources are encrypted to prevent outsiders from reading them. This thesis examines a semi-automatic interactive transcription tool, based on unsupervised learning without any labelled training data, that has been developed for the transcription of encrypted sources, and compares it to manual transcription. The tool builds on handwritten text recognition (HTR) techniques, and the system identifies clusters of symbols based on similarity measures. The tool is evaluated on ciphers with number sequences that have previously been transcribed manually, in order to measure how well the transcription tool performs. The weaknesses of the tool are described and suggestions for how it can be improved are proposed. Transcription based on HTR techniques and clustering shows promising results, and the unsupervised method based on clustering should be further investigated on ciphers with various symbol sets.
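One way to picture the clustering step is to treat every segmented symbol image as a feature vector and group visually similar symbols, so that each cluster can later be mapped to a single transcription token. The sketch below uses k-means on flattened grayscale crops; the actual tool's features, similarity measure and number of clusters are not described here, so these choices are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_symbols(symbol_images, n_clusters=40):
    """Group equal-sized grayscale symbol crops by visual similarity.
    Returns one cluster id per symbol; mapping cluster ids to transcription
    symbols is the (interactive) step that follows."""
    X = np.stack([np.asarray(img, dtype=float).ravel() / 255.0 for img in symbol_images])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# Toy example: 200 random 32x32 "symbols"; a real run would use segmented cipher glyphs.
rng = np.random.default_rng(0)
symbols = [rng.integers(0, 256, size=(32, 32)) for _ in range(200)]
print(cluster_symbols(symbols, n_clusters=10)[:20])
```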
8

Error Handling in Spoken Dialogue Systems : Managing Uncertainty, Grounding and Miscommunication

Skantze, Gabriel January 2007 (has links)
Due to the large variability in the speech signal, the speech recognition process constitutes the major source of errors in most spoken dialogue systems. A spoken dialogue system can never know for certain what the user is saying; it can only make hypotheses. As a result of this uncertainty, two types of errors can be made: over-generation of hypotheses, which leads to misunderstanding, and under-generation, which leads to non-understanding. In human-human dialogue, speakers try to minimise such miscommunication by constantly sending and picking up signals about their understanding, a process commonly referred to as grounding. The topic of this thesis is how to deal with this uncertainty in spoken dialogue systems: how to detect errors in speech recognition results, how to recover from non-understanding, how to choose when to engage in grounding, how to model the grounding process, how to realise grounding utterances, and how to detect and repair misunderstandings. The approach in this thesis is to explore and draw lessons from human error handling, and to study how error handling may be performed in different parts of a complete spoken dialogue system. These studies are divided into three parts.

In the first part, an experimental setup is presented in which a speech recogniser is used to induce errors in human-human dialogues. The results show that, unlike most dialogue systems, humans tend to employ strategies other than encouraging the interlocutor to repeat when faced with non-understanding. The collected data are also used in a follow-up experiment to explore which factors humans may benefit from when detecting errors in speech recognition results. Two machine learning algorithms are also used for this task.

In the second part, the spoken dialogue system HIGGINS is presented, including the robust semantic interpreter PICKERING and the error-aware discourse modeller GALATEA. It is shown how grounding is modelled and error handling is performed on the concept level. The system may choose to display its understanding of individual concepts, pose fragmentary clarification requests, or risk a misunderstanding and possibly detect and repair it at a later stage. An evaluation of the system with naive users indicates that the system performs well under error conditions.

In the third part, models for choosing when to engage in grounding and how to realise grounding utterances are presented. A decision-theoretic, data-driven model for making grounding decisions is applied to the data from the evaluation of the HIGGINS system. Finally, two experiments are presented that explore how the intonation of synthesised fragmentary grounding utterances affects their pragmatic meaning.

The primary target of this thesis is the management of uncertainty, grounding and miscommunication in conversational dialogue systems, which to a larger extent build upon the principles of human conversation. However, many of the methods, models and results presented should also be applicable to dialogue systems in general.
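A caricature of the concept-level grounding choice described above, where the action for each recognized concept depends on the recognizer's confidence: accept silently (and risk a later repair), display the understanding, or pose a fragmentary clarification request. The fixed thresholds and canned surface forms are invented for illustration; the thesis itself proposes a decision-theoretic, data-driven model rather than hard-coded cut-offs.

```python
def grounding_action(concept, confidence, accept_at=0.9, display_at=0.5):
    """Pick a grounding strategy for one recognized concept based on its
    recognition confidence (all thresholds are illustrative)."""
    if confidence >= accept_at:
        return "accept"                                        # integrate silently; repair later if needed
    if confidence >= display_at:
        return f"display understanding: 'Okay, {concept}.'"    # lets the user object
    return f"fragmentary clarification: '{concept}?'"          # explicit check before continuing

for conf in (0.95, 0.7, 0.3):
    print(conf, "->", grounding_action("Storgatan", conf))
```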
9

Using Unsupervised Morphological Segmentation to Improve Dependency Parsing for Morphologically Rich Languages

Yusupujiang, Zulipiye January 2018 (has links)
In this thesis, we investigate the influence of using unsupervised morphological segmentation as features in the dependency parsing of morphologically rich languages such as Finnish, Estonian, Hungarian, Turkish, Uyghur, and Kazakh. Studying the morphology of these languages is of great importance for dependency parsing, since dependency relations in these languages mostly rely on morphemes rather than on word order. To investigate our research questions, we conducted a large number of parsing experiments with both MaltParser and UDPipe. We generated supervised morphology and predicted POS tags with UDPipe, obtained unsupervised morphological segmentations from Morfessor, converted the unsupervised segmentations into features, and added them to the UD treebanks of each language. We also investigated different ways of converting the unsupervised segmentation into features and studied the result of each method. We report the labeled attachment score (LAS) for all of our experimental results. The main finding of this study is that dependency parsing of some languages can be improved simply by providing unsupervised morphology during parsing when no manually annotated or supervised morphology is available for these languages. After adding unsupervised morphological information together with predicted POS tags, we obtain improvements of 4.9%, 6.0%, 8.7%, 3.3%, 3.7%, and 12.0% on the test sets of Turkish, Uyghur, Kazakh, Finnish, Estonian, and Hungarian respectively with MaltParser, and parsing accuracy improves by 2.7%, 4.1%, 8.2%, 2.4%, 1.6%, and 2.6% on the test sets of the same languages with UDPipe, compared with models that use no morphological information during parsing.
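To make the feature-conversion step concrete, the sketch below encodes a word's unsupervised segmentation as a single FEATS-style attribute that a parser can consume alongside the POS tag. The `segment` argument is any callable returning a list of morphs -- in the experiments this would be a wrapper around a trained Morfessor model -- and the toy suffix-stripping segmenter, the feature name `Morphs`, and the `|` separator are invented here to keep the example self-contained; the thesis compares several such encodings.

```python
def segmentation_feats(word, segment):
    """Encode a word's unsupervised segmentation as a FEATS-style string,
    e.g. 'Morphs=kitap|lar|da'; unsegmented words get the empty feature '_'."""
    morphs = segment(word.lower())
    return "Morphs=" + "|".join(morphs) if len(morphs) > 1 else "_"

def toy_segment(word, suffixes=("larda", "lerde", "lar", "ler", "da", "de")):
    """Stand-in segmenter so the example runs without external tools; a real
    experiment would call an unsupervised model trained on the treebank vocabulary."""
    for suf in suffixes:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return [word[:-len(suf)]] + toy_segment(word[-len(suf):])
    return [word]

for w in ["kitaplarda", "evlerde", "kitap"]:
    print(w, "->", segmentation_feats(w, toy_segment))
# kitaplarda -> Morphs=kitap|lar|da
# evlerde    -> Morphs=evler|de
# kitap      -> _
```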
