91

Blending Words or: How I Learned to Stop Worrying and Love the Blendguage : A computational study of lexical blending in Swedish

Ek, Adam January 2018 (has links)
This thesis investigates Swedish lexical blends. A lexical blend is defined as the concatenation of two words, where at least one word has been reduced. Lexical blends are approached from two perspectives. First, the thesis investigates lexical blends as they appear in the Swedish language. It is found that there is a significant statistical relationship between the two source words in terms of orthographic, phonemic and syllabic length, and in terms of frequency in a reference corpus. Furthermore, some uncommon lexical blends created from pronouns and interjections are described. Lexical blends are also characterized by their semantic construction and their similarity to other word-formation processes. Secondly, the thesis develops a model which predicts the source words of lexical blends, using logistic regression. The evaluation shows that, with a ranking approach, the correct source words are the highest-ranking word pair in 32.2% of the cases. The correct word pair is found among the top 10 ranking word pairs in 60.6% of the cases. The results are lower than in previous studies, but the number of blends used is also smaller. It is shown that lexical blends which overlap are easier to predict than lexical blends which do not. Feature ablation shows that semantic and frequency-related features are the most important for the prediction of source words.
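To make the ranking setup concrete, here is a toy sketch (not the thesis's logistic-regression model): candidate source-word pairs are scored by how many characters of the blend they can reconstruct as a prefix of the first word plus a suffix of the second, over an invented mini-vocabulary.

```python
from itertools import product

def rank_source_pairs(blend, vocabulary):
    """Rank candidate (w1, w2) source-word pairs for a blend by how many of
    the blend's characters are covered by a prefix of w1 plus a suffix of
    w2 (a crude stand-in for a learned feature-based scorer)."""
    scored = []
    for w1, w2 in product(vocabulary, repeat=2):
        if w1 == w2:
            continue
        best = 0
        for i in range(1, len(blend)):
            prefix, suffix = blend[:i], blend[i:]
            matched = 0
            if w1.startswith(prefix):
                matched += len(prefix)
            if w2.endswith(suffix):
                matched += len(suffix)
            best = max(best, matched)
        scored.append((best, w1, w2))
    # highest score first; ties broken alphabetically for reproducibility
    scored.sort(key=lambda t: (-t[0], t[1], t[2]))
    return scored

vocab = ["breakfast", "lunch", "smoke", "fog", "motor", "hotel"]
top = rank_source_pairs("brunch", vocab)[0]
print(top)  # (6, 'breakfast', 'lunch'): "br" + "unch" covers all six characters
```

In this toy setting the gold pair ranks first; the thesis's actual features (length, frequency, semantics) are richer than pure string overlap.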
92

Unsupervised Normalisation of Historical Spelling : A Multilingual Evaluation

Bergman, Nicklas January 2018 (has links)
Historical texts are an important resource for researchers in the humanities. However, standard NLP tools typically perform poorly on them, mainly due to the spelling variation present in such texts. One possible solution is to normalise the spelling variations to equivalent contemporary word forms before applying standard tools. Weighted edit distance has previously been used for such normalisation, improving over the results of algorithms based on standard edit distance. Aligned training data is needed to extract weights, but such data is scarce, so an unsupervised method for extracting edit distance weights is desirable. This thesis presents a multilingual evaluation of an unsupervised method for extracting edit distance weights for the normalisation of historical spelling variation. The model is evaluated for English, German, Hungarian, Icelandic and Swedish. The results are mixed and vary considerably between data sets. The method generally performs better than normalisation based on standard edit distance but, as expected, does not quite reach the results of a model trained on aligned data. The results show an increase in normalisation accuracy compared to standard edit distance normalisation for all languages except German, which shows slightly reduced accuracy, and Swedish, which shows results similar to standard edit distance normalisation.
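The core technique, weighted edit distance, can be sketched as follows. The character-pair weight here (a cheap `v`→`u` substitution) is invented for illustration; in the thesis such weights are extracted from data rather than hand-set.

```python
def weighted_edit_distance(a, b, sub_cost):
    """Edit distance by dynamic programming where substitutions are charged
    per character pair from `sub_cost` (default 1.0 for unlisted mismatches);
    insertions and deletions cost 1.0."""
    m, n = len(a), len(b)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = float(i)
    for j in range(1, n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost.get((a[i - 1], b[j - 1]), 1.0)
            dp[i][j] = min(dp[i - 1][j] + 1.0,      # delete from a
                           dp[i][j - 1] + 1.0,      # insert into a
                           dp[i - 1][j - 1] + sub)  # substitute / match
    return dp[m][n]

def normalise(word, lexicon, sub_cost):
    """Normalise a historical spelling to the closest contemporary form."""
    return min(lexicon, key=lambda w: weighted_edit_distance(word, w, sub_cost))

weights = {("v", "u"): 0.1}  # invented weight: historical 'v' for modern 'u' is cheap
print(weighted_edit_distance("vnder", "under", weights))  # 0.1
print(normalise("vnder", ["under", "wonder", "thunder"], weights))  # under
```

With uniform costs, "vnder" is equally far from several candidates; the learned weights are what make the historically plausible substitution win.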
93

Depending on VR : Rule-based Text Simplification Based on Dependency Relations

Johansson, Vida January 2017 (has links)
The amount of text that is written and made available increases all the time. However, it is not readily accessible to everyone. The goal of the research presented in this thesis was to develop a system for automatic text simplification based on dependency relations, develop a set of simplification rules for the system, and evaluate the performance of the system. The system was built on a previous tool, extended to ensure that it could perform the operations required by the rules in the rule set. The rule set was developed by manually adapting the rules to a set of training texts. The evaluation method used was a classification task with both objective measures (precision and recall) and a subjective measure (correctness). The performance of the system was compared to that of a system based on constituency relations. The results showed that the current system scored higher on both precision (96% compared to 82%) and recall (86% compared to 53%), indicating that the syntactic information provided by dependency relations is sufficient for text simplification. Further evaluation should examine how helpful the simplified text produced by the current system is for target readers.
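As an illustration of what a dependency-based simplification rule looks like, here is a minimal deletion rule over head-indexed tokens. The specific rule (dropping apposition subtrees) and the tiny example sentence are invented; the thesis's actual rule set and tooling differ.

```python
def collect_subtree(heads, root):
    """Indices of all tokens dominated by `root` (including root itself),
    given heads[i] = parent index of token i, with -1 for the sentence root."""
    keep = {root}
    changed = True
    while changed:
        changed = False
        for i, h in enumerate(heads):
            if h in keep and i not in keep:
                keep.add(i)
                changed = True
    return keep

def drop_appositions(tokens, heads, rels):
    """Rule sketch: delete every apposition subtree ('appos' relation),
    a typical deletion rule in dependency-based simplification."""
    drop = set()
    for i, rel in enumerate(rels):
        if rel == "appos":
            drop |= collect_subtree(heads, i)
    return [t for i, t in enumerate(tokens) if i not in drop]

# "Smith , a linguist , resigned" with hand-annotated dependencies
tokens = ["Smith", ",", "a", "linguist", ",", "resigned"]
heads = [5, 3, 3, 0, 3, -1]
rels = ["nsubj", "punct", "det", "appos", "punct", "root"]
print(" ".join(drop_appositions(tokens, heads, rels)))  # Smith resigned
```

The appeal of operating on dependency trees is visible even here: the rule deletes a whole subtree in one step, regardless of its length or position.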
94

Thoughts don't have Colour, do they? : Finding Semantic Categories of Nouns and Adjectives in Text Through Automatic Language Processing / Generering av semantiska kategorier av substantiv och adjektiv genom automatisk textbearbetning

Fallgren, Per January 2017 (has links)
Not all combinations of nouns and adjectives are possible, and some are clearly more frequent than others. With this in mind, this study aims to construct semantic representations of the two parts of speech based on how they occur with each other. By investigating these ideas with automatic natural language processing methods, the study aims to find evidence for a semantic mutuality between nouns and adjectives: the notion that the semantics of a noun can be captured by its corresponding adjectives, and vice versa. Furthermore, a set of proposed categories of adjectives and nouns, based on the ideas of Gärdenfors (2014), is presented that is hypothesized to align with the produced representations. Four evaluation methods were used to analyze the result, ranging from subjective discussion of nearest neighbours in vector space to accuracy computed from manual annotation. The result provides some evidence for the hypothesis, which suggests that further research is of value.
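The underlying idea, representing a noun by the adjectives it occurs with, can be sketched with plain count vectors and cosine similarity. The thesis works with trained distributional models; the word pairs below are invented to show the mechanism only.

```python
from collections import Counter, defaultdict
from math import sqrt

def noun_vectors(pairs, adjectives):
    """Represent each noun as a count vector over the adjectives it occurs with."""
    counts = defaultdict(Counter)
    for adj, noun in pairs:
        counts[noun][adj] += 1
    return {noun: [c[a] for a in adjectives] for noun, c in counts.items()}

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu, nv = sqrt(sum(x * x for x in u)), sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

pairs = [("red", "car"), ("fast", "car"), ("red", "bike"),
         ("fast", "bike"), ("abstract", "thought"), ("deep", "thought")]
adjs = ["red", "fast", "abstract", "deep"]
vecs = noun_vectors(pairs, adjs)
print(round(cosine(vecs["car"], vecs["bike"]), 6))   # 1.0 (same adjectives)
print(cosine(vecs["car"], vecs["thought"]))          # 0.0 (no shared adjectives)
```

Nouns that take the same adjectives end up close in the space; "thought", which does not take colour or speed adjectives, is orthogonal to the concrete nouns, echoing the title's point.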
95

Dependency Parsing and Dialogue Systems : an investigation of dependency parsing for commercial application

Adams, Allison January 2017 (has links)
In this thesis, we investigate dependency parsing for commercial application, namely for future integration in a dialogue system. To do this, we conduct several experiments on dialogue data to assess parser performance on this domain and to improve this performance over a baseline. This work makes the following contributions: first, the creation and manual annotation of a gold-standard data set for dialogue data; second, a thorough error analysis of the data set, comparing neural network parsing to traditional parsing methods on this domain; and finally, various domain adaptation experiments showing how parsing on this data set can be improved over a baseline. We further show that dialogue data is characterized in particular by questions, and suggest a method for improving overall parsing on these constructions.
96

Större chans att klara det? : En specialpedagogisk studie av 10 ungdomars syn på hur datorstöd har påverkat deras språk, lärande och skolsituation.

Hansson, Britt January 2008 (has links)
In this study, 10 young people were interviewed about their experiences of using a computer with speech synthesis and recorded books. They were asked in which situations the tools had been useful, or had felt inhibiting, for their learning and school situation. Because of severe difficulties at school, the students had been lent a laptop by the school, which they used both at home and at school. Together with their parents and teachers, they received guidance at the municipal Skoldatatek (school computer resource centre). The starting point of the study, taken from a sociocultural perspective, is that language develops when it is used. Schools are required to offer an up-to-date education, and pupils with school difficulties are entitled to support. How this support should be designed can create a dilemma at the individual school: support aimed directly at the individual may be perceived as framing school difficulties as a problem carried by the pupil, which must not occur in "a school for all". Given this dilemma, it was important to investigate the students' experiences of support, development and obstacles, in order to understand whether these cause singling out and exclusion. The results showed that the students felt more motivated with their computer tools, which compensated for their difficulties and suited their different learning styles. The students reported having become more confident writers and readers thanks to increased language use. Their accounts also show the necessity of support from teachers and parents. The results indicate that alternative learning tools could contribute to greater goal attainment in a school for all, with pedagogical diversity.
97

Utveckling av ett svensk-engelskt lexikon inom tåg- och transportdomänen

Axelsson, Hans, Blom, Oskar January 2006 (has links)
This paper describes the process of building a machine translation lexicon for use in the train and transport domain with the machine translation system MATS. The lexicon consists of a Swedish part, an English part and links between them, and is derived from a Trados translation memory which is split into a training part (90%) and a testing part (10%). The task is carried out mainly by using existing word-linking software and recycling previous machine translation lexicons from other domains. To this end, a method is developed with a focus on automation by means of both existing and self-developed software, in combination with manual interaction. The domain-specific lexicon is then extended with a domain-neutral core lexicon and a less domain-neutral general lexicon. The different lexicons are automatically and manually evaluated through machine translation on the test corpus. The automatic evaluation of the largest lexicon yielded a NEVA score of 0.255 and a BLEU score of 0.190. In the manual evaluation, 34% of the segments were correctly translated, 37% were not correct but perfectly understandable, and 29% were difficult to understand.
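The word-linking step can be caricatured with a simple co-occurrence heuristic: link each Swedish word to the English word with the highest Dice association score. Real word-alignment tools are far more sophisticated than this, and the three sentence pairs are invented.

```python
from collections import Counter, defaultdict

def extract_lexicon(bitext):
    """Toy lexicon extraction from sentence-aligned pairs: link each source
    word to the target word with the highest Dice association,
    2 * cooc(s, t) / (count(s) + count(t)), counted once per sentence."""
    cooc = defaultdict(Counter)
    s_count, t_count = Counter(), Counter()
    for src, tgt in bitext:
        s_words, t_words = set(src.split()), set(tgt.split())
        s_count.update(s_words)
        t_count.update(t_words)
        for s in s_words:
            for t in t_words:
                cooc[s][t] += 1
    return {s: max(cands, key=lambda t: 2 * cands[t] / (s_count[s] + t_count[t]))
            for s, cands in cooc.items()}

bitext = [
    ("tåget går nu", "the train leaves now"),
    ("tåget är försenat", "the train is late"),
    ("bussen går nu", "the bus leaves now"),
]
lex = extract_lexicon(bitext)
print(lex["tåget"])   # train  ("the" is penalised for occurring in every sentence)
print(lex["bussen"])  # bus
```

The Dice denominator is what keeps high-frequency function words like "the" from being linked to everything, which is also why such heuristics need large corpora to work at all.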
98

Grundtonsstrategier vid tonlösa segment

von Kartaschew, Filip January 2007 (has links)
Prosody models that can be used in, for example, speech synthesis are often based on analyses of speech consisting solely of voiced segments. Before a voiceless consonant, the fundamental frequency (F0) contour of a vowel segment lacks a possible continuation and also becomes shorter. This is usually handled by truncating the F0 contour. Earlier studies have briefly shown that differences in a vowel's F0 contour, beyond truncation, can arise depending on whether the following segment is voiced or voiceless. Starting from these studies, this thesis examines the F0 contour in Swedish sentences. The results of this study likewise show that different F0 strategies are used, and that truncation is not sufficient to explain what happens to the F0 contour in these contexts. In general, the results suggest that it is important for the speakers to preserve the information the F0 contour conveys in the form of maximum and minimum values, and that falls and rises are maintained as far as possible.
99

Extracting social networks from fiction : Imaginary and invisible friends: Investigating the social world of imaginary friends.

Ek, Adam January 2017 (has links)
This thesis develops an approach to extract the social relations between characters in literary text in order to create a social network. The approach uses co-occurrences of named entities, keywords associated with the named entities, and the dependency relations that exist between the named entities to construct the network. Literary texts contain a large number of pronouns referring to the named entities; to resolve the antecedents of these pronouns, a pronoun resolution system is implemented based on a standard pronoun resolution algorithm. The results indicate that the pronoun resolution system finds the correct named entity in 60.4% of all cases. The social network is evaluated by comparing character importance rankings based on graph properties with independent, human-generated importance rankings. The generated social networks correlate moderately to strongly with the independent character rankings.
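The co-occurrence backbone of such a network can be sketched in a few lines. Keyword and dependency features, as well as pronoun resolution, are left out here, and the character names and sentences are invented.

```python
from collections import Counter
from itertools import combinations

def build_network(sentences, characters):
    """Build a co-occurrence network: characters appearing in the same
    sentence get an edge weighted by the number of shared sentences; rank
    characters by weighted degree as a simple importance proxy."""
    edges = Counter()
    for sentence in sentences:
        present = sorted(set(sentence.split()) & characters)
        for a, b in combinations(present, 2):
            edges[(a, b)] += 1
    degree = Counter()
    for (a, b), weight in edges.items():
        degree[a] += weight
        degree[b] += weight
    return edges, [name for name, _ in degree.most_common()]

sentences = ["Anna met Bertil .", "Anna saw Cecilia .", "Anna and Bertil left ."]
edges, ranking = build_network(sentences, {"Anna", "Bertil", "Cecilia"})
print(ranking[0])                 # Anna (weighted degree 3)
print(edges[("Anna", "Bertil")])  # 2
```

Weighted degree is only one of the graph properties one could rank by; centrality measures such as PageRank or betweenness are common alternatives for the same comparison against human rankings.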
100

Automatisk utvinning av felaktigt särskrivna sammansättningar

Hedén, Sofia January 2017 (has links)
This thesis describes the automatic extraction of split compounds, which are added to a lexicon and implemented in an existing spell checker. The work was performed in cooperation with Svensk TalTeknologi. Many writers have difficulty knowing which phrases should be written jointly and which may be written separately, and the computer-assisted language editors that exist for Swedish today have difficulties dealing with erroneously split and joint compounds, which can result in misleading recommendations. The method developed in this work extracts joined bigrams from a non-normative corpus of 84.6 MB and compares them with unigrams from a normative corpus of 99.2 MB. With some limitations, 2492 possible split compounds that are found in both corpora are extracted and put in a lexicon. The lexicon's precision amounts to 92%. The spell checker's recall amounts to 60.8% for erroneously split compounds together with words that may be written either jointly or separately, and to 41.6% for erroneously split compounds alone. The lexicon shows high accuracy, and with simple means the precision can be increased further. The spell checker does not perform as well, but with a more extensive lexicon its performance will improve too.
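The bigram-versus-unigram comparison at the heart of the method can be sketched as follows; the lexicon and the example sentence are invented, and the real system's corpus statistics and filtering limitations are omitted.

```python
def find_split_compounds(text, lexicon):
    """Flag adjacent word pairs whose concatenation is a known word in a
    normative lexicon: candidates for erroneously split compounds."""
    words = text.split()
    hits = []
    for w1, w2 in zip(words, words[1:]):
        joined = (w1 + w2).lower()
        if joined in lexicon:
            hits.append((w1, w2, joined))
    return hits

# invented normative unigrams and an input with two split compounds
lexicon = {"stavningskontroll", "tågstation"}
text = "en bra stavnings kontroll vid tåg station"
print(find_split_compounds(text, lexicon))
# [('stavnings', 'kontroll', 'stavningskontroll'), ('tåg', 'station', 'tågstation')]
```

In practice the bigrams would come from the non-normative corpus and the unigram lexicon from the normative one, with frequency thresholds to suppress false positives.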
