  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Compound Processing for Phrase-Based Statistical Machine Translation

Stymne, Sara January 2009 (has links)
In this thesis I explore how compound processing can be used to improve phrase-based statistical machine translation (PBSMT) between English and German/Swedish. Both German and Swedish generally use closed compounds, which are written as one word without spaces or other indicators of word boundaries. Compounding is both common and productive, which makes it problematic for PBSMT, mainly due to sparse data problems.

The adopted strategy for compound processing is to split compounds into their component parts before training and translation. For translation into Swedish and German the parts are merged after translation. I investigate the effect of different splitting algorithms for translation between English and German, and of different merging algorithms for German. I also apply these methods to a different language pair, English-Swedish. Overall the studies show that compound processing is useful, especially for translation from English into German or Swedish. But there are improvements for translation into English as well, such as a reduction of unknown words.

I show that for translation between English and German different splitting algorithms work best for different translation directions. I also design and evaluate a novel merging algorithm based on part-of-speech matching, which outperforms previous methods for compound merging, showing the need for information that is carried through the translation process, rather than only external knowledge sources such as word lists. Most of the methods for compound processing were originally developed for German. I show that these methods can be applied to Swedish as well, with similar results.
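As a hedged illustration of the splitting step, the sketch below scores candidate splits of a compound by the geometric mean of the corpus frequencies of its parts, in the spirit of the frequency-based baseline this line of work builds on; the toy frequency dictionary, the minimum part length and the two-cut limit are assumptions, and the thesis's own splitting algorithms (which also consider, for example, linking letters and part-of-speech constraints) are not reproduced here.

```python
from itertools import combinations
from math import prod


def split_compound(word, freq, min_len=3):
    """Return the best split of `word` into known parts, scored by the
    geometric mean of their corpus frequencies; keep the word unsplit
    if no split scores higher than the word's own frequency."""
    best_parts, best_score = [word], freq.get(word, 0)
    positions = range(min_len, len(word) - min_len + 1)
    for n_cuts in (1, 2):                       # allow at most three parts here
        for cuts in combinations(positions, n_cuts):
            bounds = (0, *cuts, len(word))
            parts = [word[i:j] for i, j in zip(bounds, bounds[1:])]
            if any(len(p) < min_len or p not in freq for p in parts):
                continue
            score = prod(freq[p] for p in parts) ** (1.0 / len(parts))
            if score > best_score:
                best_parts, best_score = parts, score
    return best_parts


# Toy Swedish example: "regeringskris" (government crisis)
freq = {"regering": 1200, "regerings": 40, "kris": 800, "regeringskris": 5}
print(split_compound("regeringskris", freq))    # ['regerings', 'kris']
```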
232

Mer lättläst : Påbyggnad av ett automatiskt omskrivningsverktyg till lätt svenska

Abrahamsson, Peder January 2011 (has links)
The Swedish language should be accessible to everyone who lives and works in Sweden. It is therefore important that easy-to-read alternatives exist for those who have difficulty reading Swedish text. This work builds on earlier efforts to show that it is possible to create an automatic rewriting program that makes texts easier to read. The work is based on CogFLUX, a tool for automatic rewriting into easy Swedish. CogFLUX contains functions for syntactically rewriting texts into more readable Swedish. The rewrites are made using rewriting rules developed in an earlier project. In this work additional rewriting rules are implemented, along with a new module for handling synonyms. With these new rules and the synonym module, the work investigates whether it is possible to create a system that produces a more readable text according to established readability measures such as LIX, OVIX and the nominal ratio (Nominalkvot). The rewriting rules and the synonym handler are tested on three different texts with a total length of roughly one hundred thousand words. The work shows that both the LIX value and the nominal ratio can be lowered significantly with the help of the rewriting rules and the synonym handler. It also shows that more remains to be done to produce a truly good program for automatic rewriting into easy Swedish.
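Of the readability measures mentioned, LIX and OVIX have closed-form definitions that can be computed directly from token counts; the sketch below is a minimal illustration assuming naive regex tokenization and sentence splitting, not the tooling used in the thesis. The nominal ratio additionally requires part-of-speech tags (roughly, nouns, prepositions and participles over pronouns, adverbs and verbs), so it is left out here.

```python
import math
import re


def lix(text):
    """LIX = words per sentence + 100 * share of words longer than six characters."""
    words = re.findall(r"\w+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / len(words)


def ovix(text):
    """OVIX word-variation index: log(tokens) / log(2 - log(types) / log(tokens))."""
    tokens = [w.lower() for w in re.findall(r"\w+", text)]
    n, v = len(tokens), len(set(tokens))      # undefined if every token is unique
    return math.log(n) / math.log(2 - math.log(v) / math.log(n))


sample = ("Det svenska språket ska finnas tillgängligt för alla. "
          "Texten skrivs om till lätt svenska.")
print(round(lix(sample), 1), round(ovix(sample), 1))
```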
233

Probability as readability : A new machine learning approach to readability assessment for written Swedish / Sannolikhet som läsbarhet : En ny maskininlärningsansats till läsbarhetsmätning för skriven svenska

Sjöholm, Johan January 2012 (has links)
This thesis explores the possibility of assessing the degree of readability of written Swedish using machine learning. An application using four levels of linguistic analysis has been implemented and tested with four different established algorithms for machine learning. The new approach has then been compared to established readability metrics for Swedish. The results indicate that the new method works significantly better for readability classification of both sentences and documents. The system has also been tested with so called soft classification, which returns a probability for the degree of readability of a given text. This probability can then be used to rank texts according to probable degree of readability. / Detta examensarbete utforskar möjligheterna att bedöma svenska texters läsbarhet med hjälp av maskininlärning. Ett system som använder fyra nivåer av lingvistisk analys har implementerats och testats med fyra olika etablerade algoritmer för maskininlärning. Det nya angreppssättet har sedan jämförts med etablerade läsbarhetsmått för svenska. Resultaten visar att den nya metoden fungerar markant bättre för läsbarhetsklassning av både meningar och hela dokument. Systemet har också testats med så kallad mjuk klassificering som ger ett sannolikhetsvärde för en given texts läsbarhetsgrad. Detta sannolikhetsvärde kan användas för att rangordna texter baserat på sannolik läsbarhetsgrad.
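A minimal sketch of the soft-classification idea, assuming synthetic stand-in features and scikit-learn's logistic regression rather than the thesis's own feature set and learners: the classifier's class probability is used directly to rank texts by probable readability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))                     # stand-in linguistic features
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)   # 1 = "easy to read"

clf = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(3, 5))                         # three unseen "documents"
p_easy = clf.predict_proba(X_new)[:, 1]                 # soft classification: P(easy | features)
ranking = np.argsort(-p_easy)                           # rank by probable degree of readability
print(p_easy.round(2), ranking)
```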
234

Creation of a customised character recognition application

Sandgren, Frida January 2005 (has links)
This master’s thesis describes the work in creating a customised optical character recognition (OCR) application, intended for use in digitisation of theses submitted to Uppsala University in the 18th and 19th centuries. For this purpose, the open source software Gamera has been used for recognition and classification of the characters in the documents. The software provides specific algorithms for analysis of heritage documents and is designed to be used as a tool for creating domain-specific (i.e. customised) recognition applications. By using the Gamera classifier training interface, classifier data was created which reflects the characters in the particular theses. The data can then be used in automatic recognition of ‘new’ characters, by loading it into one of Gamera’s classifiers. The output of Gamera consists of sets of classified glyphs (i.e. small images of characters), stored in an XML-based format. However, as OCR typically involves translation of images of text into a machine-readable format, a complementary OCR module was needed. For this purpose, an external Gamera module for page segmentation was modified and used. In addition, a script for control of the OCR process was created, which initiates the page segmentation on Gamera-classified glyphs. The result is written to text files. Finally, in a test of recognition accuracy, one of the theses was used for creation of training data and test data. The results from the test show an average accuracy rate of 82% and that there is a need for a better pre-processing module which removes more noise from the images, as well as recognises different character sizes in the images, before they are run through the OCR process.
235

Classification into Readability Levels : Implementation and Evaluation

Larsson, Patrik January 2006 (has links)
The use for a readability classification model is mainly as an integrated part of an information retrieval system. By matching the user's demands on readability to the documents with the corresponding readability, the classification model can further improve the results of, for example, a search engine. This thesis presents a new solution for classification into readability levels for Swedish. The results from the thesis are a number of classification models. The models were induced by training a Support Vector Machines classifier on features that are established by previous research as good measurements of readability. The features were extracted from a corpus annotated with three readability levels. Natural Language Processing tools for tagging and parsing were used to analyze the corpus and enable the extraction of the features. Empirical testing of different feature combinations was performed to optimize the classification model. The classification models render a good and stable classification. The best model obtained a precision score of 90.21% and a recall score of 89.56% on the test set, which is equal to an F-score of 89.88. / Uppsatsen beskriver utvecklandet av en klassificeringsmodell för svenska texter beroende på dess läsbarhet. Användningsområdet för en läsbarhetsklassificeringsmodell är främst inom informationssökningssystem. Modellen kan öka träffsäkerheten på de dokument som anses relevanta av en sökmotor genom att matcha användarens krav på läsbarhet med de indexerade dokumentens läsbarhet. Resultatet av uppsatsen är ett antal modeller för klassificering av text beroende på läsbarhet. Modellerna har tagits fram genom att träna upp en Support Vector Machines-klassificerare på ett antal särdrag som av tidigare forskning har fastslagits vara goda mått på läsbarhet. Särdragen extraherades från en korpus som är annoterad med tre läsbarhetsnivåer. Språkteknologiska verktyg för taggning och parsning användes för att möjliggöra extraktionen av särdragen. Särdragen utvärderades empiriskt i olika särdragskombinationer för att optimera modellerna. Modellerna testades och utvärderades med goda resultat. Den bästa modellen hade en precision på 90,21 och en recall på 89,56, vilket ger en F-score på 89,88. Uppsatsen presenterar förslag på vidareutveckling samt potentiella användningsområden.
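As a quick consistency check (not part of the thesis itself), the reported F-score follows directly from the precision and recall figures via F1 = 2PR / (P + R):

```python
precision, recall = 90.21, 89.56
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))   # 89.88, matching the reported F-score
```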
236

Attitydanalys av svenska produktomdömen – behövs språkspecifika verktyg? / Sentiment Analysis of Swedish Product Reviews – Are Language-specific Tools Necessary?

Glant, Oliver January 2018 (has links)
Sentiment analysis of Swedish data is often performed using English tools and machine translation. This thesis compares a neural network trained on Swedish data with a corresponding one trained on English data. Two datasets were used: approximately 200,000 non-neutral Swedish reviews from the company Prisjakt Sverige AB, one of the largest annotated datasets used for Swedish sentiment analysis, and 1,000,000 non-neutral English reviews from Amazon.com. Both networks were evaluated on 11,638 randomly selected reviews, in Swedish and in English machine translation. The test set had the same overrepresentation of positive reviews as the Swedish dataset (84% were positive). The results suggest that English tools can be used with machine translation for sentiment analysis of Swedish reviews without loss of classification ability. However, the English tool required 33% more training data to achieve maximum performance. Evaluation on the unbalanced test set required extra consideration regarding statistical measures. The F1-measure turned out to be reliable only when calculated for the underrepresented class. It then showed a strong correlation with the Matthews correlation coefficient, which has been found to be more reliable. This warrants further investigation into whether the correlation holds for all class balances, which would simplify comparison between studies. / Attitydanalys av svensk data sker i många fall genom maskinöversättning till engelska för att använda tillgängliga analysverktyg. I den här uppsatsen undersöktes skillnaden mellan användning av ett neuronnät tränat på svensk data och av motsvarande neuronnät tränat på engelsk data. Två datamängder användes: cirka 200 000 icke-neutrala svenska produktomdömen från Prisjakt Sverige AB, en av de största annoterade datamängder som använts för svensk attitydanalys, och 1 000 000 icke-neutrala engelska produktomdömen från Amazon.com. Båda versionerna av neuronnätet utvärderades på 11 638 slumpmässigt utvalda svenska produktomdömen, i original och maskinöversatta till engelska. Testmängden hade samma överrepresentation av positiva omdömen som den svenska datamängden (84% positiva omdömen). Resultaten tyder på att engelska verktyg med hjälp av maskinöversättning kan användas för attitydanalys av svenska produktomdömen med bibehållen klassificeringsförmåga, dock krävdes cirka 33% större träningsdata för att det engelska verktyget skulle uppnå maximal klassificeringsförmåga. Utvärdering på den obalanserade datamängden visade sig ställa särskilda krav på de statistiska mått som användes. F1-värde fungerade tillfredsställande endast när det beräknades för den underrepresenterade klassen. Det korrelerade då starkt med Matthews korrelationskoefficient, som tidigare funnits vara ett pålitligare mått. Om korrelationen gäller vid alla olika balanser skulle jämförelser mellan olika studiers resultat underlättas, något som bör undersökas.
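The point about unbalanced evaluation can be illustrated with a toy example, assuming synthetic labels at the same 84/16 balance (not the thesis data) and scikit-learn's standard metric implementations: when a classifier is weak on the minority class, the minority-class F1 and the Matthews correlation coefficient both drop, while the majority-class F1 stays deceptively high.

```python
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef

rng = np.random.default_rng(1)
y_true = (rng.random(10_000) < 0.84).astype(int)   # 1 = positive review, 84% of the data
# a classifier that is good on positives but mediocre on negatives
error_rate = np.where(y_true == 1, 0.05, 0.35)
flip = rng.random(10_000) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

print("F1, positive (majority) class:", round(f1_score(y_true, y_pred, pos_label=1), 3))
print("F1, negative (minority) class:", round(f1_score(y_true, y_pred, pos_label=0), 3))
print("Matthews correlation coeff.: ", round(matthews_corrcoef(y_true, y_pred), 3))
```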
237

Natural language processing in cross-media analysis

Woldemariam, Yonas Demeke January 2018 (has links)
A cross-media analysis framework is an integrated multi-modal platform in which a media resource containing different types of data, such as text, images, audio and video, is analyzed with metadata extractors working jointly to contextualize the media resource. It generally provides cross-media analysis and automatic annotation, metadata publication and storage, and search and recommendation services. Such services allow on-line content providers to semantically enhance a media resource with extracted metadata representing its hidden meanings, making it more efficiently searchable. Within the architecture of such frameworks, Natural Language Processing (NLP) infrastructures cover a substantial part. The NLP infrastructures include text analysis components such as a parser, named entity extraction and linking, sentiment analysis and automatic speech recognition. Since NLP tools and techniques are originally designed to operate in isolation, integrating them in cross-media frameworks and analyzing textual data extracted from multimedia sources is very challenging. In particular, the text extracted from audio-visual content lacks linguistic features that potentially provide important clues for text analysis components. Thus, there is a need to develop various techniques to meet the requirements and design principles of the frameworks. In this thesis, we explore the development of methods and models that satisfy the text and speech analysis requirements posed by cross-media analysis frameworks. The developed methods allow the frameworks to extract linguistic knowledge of various types and to predict information such as sentiment and competence. We also attempt to enhance the multilingualism of the frameworks by designing an analysis pipeline that includes speech recognition, transliteration and named entity recognition for Amharic, which also makes Amharic content on the web more efficiently accessible. The method can potentially be extended to support other under-resourced languages.
238

A Random Indexing Approach to Unsupervised Selectional Preference Induction

Hägglöf, Hillevi, Tengstrand, Lisa January 2011 (has links)
A selectional preference is the relation between a head-word and plausible arguments of that head-word. Estimating the strength of association between these words is important for natural language processing applications such as Word Sense Disambiguation. This study presents a novel approach to selectional preference induction within a Random Indexing word space. This is a spatial representation of meaning in which distributional patterns enable estimation of the similarity between words. Using only frequency statistics about words to estimate how strongly one word selects another, this study aims to develop a flexible method that is not language dependent and does not require any annotated resources, in contrast to methods from previous research. In order to optimize the performance of the selectional preference model, experiments including parameter tuning and variation of corpus size were conducted. The selectional preference model was evaluated in a pseudo-word evaluation, which lets the model decide which of two arguments has a stronger correlation to a given verb. Results show that varying parameters and corpus size does not affect the performance of the selectional preference model in a notable way. The conclusion of the study is that the language model used does not provide adequate tools for modelling selectional preferences. This might be due to a noisy representation of head-words and their arguments.
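A compact sketch of a Random Indexing word space under simplifying assumptions (tiny toy corpus, fixed window size, no frequency weighting): each word receives a sparse ternary index vector, context vectors are accumulated over a co-occurrence window, and cosine similarity compares words. The selectional-preference scoring and the pseudo-word evaluation used in the study are not reproduced here.

```python
import numpy as np


def build_ri_space(sentences, dim=512, nonzeros=8, window=2, seed=0):
    """Random Indexing: sparse ternary index vectors, context vectors
    accumulated over a symmetric co-occurrence window."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for s in sentences for w in s})
    index = {}
    for w in vocab:
        v = np.zeros(dim)
        pos = rng.choice(dim, size=nonzeros, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=nonzeros)
        index[w] = v
    context = {w: np.zeros(dim) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    context[w] += index[s[j]]
    return context


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


sents = [["the", "dog", "ate", "the", "bone"],
         ["the", "cat", "ate", "the", "fish"],
         ["the", "dog", "chased", "the", "cat"]]
space = build_ri_space(sents)
# words that occur as arguments of "ate" end up with similar context vectors
print(round(cosine(space["bone"], space["fish"]), 2))
```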
239

Semi-Automatic Translation of Medical Terms from English to Swedish : SNOMED CT in Translation / Semiautomatisk översättning av medicinska termer från engelska till svenska : Översättning av SNOMED CT

Lindgren, Anna January 2011 (has links)
The Swedish National Board of Health and Welfare has been overseeing translations of the international clinical terminology SNOMED CT from English to Swedish. This study was performed to find out whether semi-automatic methods of translation could produce a satisfactory translation while requiring fewer resources than manual translation. Using the medical English-Swedish dictionary TermColl, translations of selected subsets of SNOMED CT were produced by means of translation memory and statistical translation. The resulting translations were evaluated via BLEU score, using translations provided by the Swedish National Board of Health and Welfare as reference, before being compared with each other. The results showed a strong advantage for statistical translation over use of a translation memory; however, overall translation results were far from satisfactory. / Den internationella kliniska terminologin SNOMED CT har översatts från engelska till svenska under ansvar av Socialstyrelsen. Den här studien utfördes för att påvisa om semiautomatiska översättningsmetoder skulle kunna utföra tillräckligt bra översättning med färre resurser än manuell översättning. Den engelsk-svenska medicinska ordlistan TermColl användes som bas för översättning av delmängder av SNOMED CT via översättningsminne och genom statistisk översättning. Med Socialstyrelsens översättningar som referens poängsattes de semiautomatiska översättningarna via BLEU. Resultaten visade att statistisk översättning gav ett betydligt bättre resultat än översättning med översättningsminne, men över lag var resultaten alltför dåliga för att semiautomatisk översättning skulle kunna rekommenderas i detta fall.
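As a hedged sketch of the evaluation step, BLEU can be computed against reference translations with NLTK; the token lists below are made-up placeholders, not SNOMED CT content or the reference translations used in the study.

```python
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu

# one list of reference translations per segment, plus the system hypotheses
references = [[["akut", "blindtarmsinflammation"]],
              [["kroniskt", "obstruktiv", "lungsjukdom"]]]
hypotheses = [["akut", "appendicit"],
              ["kroniskt", "obstruktiv", "lungsjukdom"]]

smooth = SmoothingFunction().method1   # short segments need smoothing
print(round(corpus_bleu(references, hypotheses, smoothing_function=smooth), 3))
```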
240

Search Engine Optimization and the Long Tail of Web Search

Dennis, Johansson January 2016 (has links)
In the field of search engine optimization, many methods exist and many aspects are important to keep in mind. This thesis studies the relation between keywords and website ranking in Google Search, and how one can create the biggest positive impact. Keywords with smaller search volume are called "long tail" keywords, and they have the potential to expand the visibility of a website to a larger audience by increasing its rank for the large fraction of keywords that may not be common on their own but together account for a large share of total web searches. This thesis analyzes where on the web page these keywords should be placed, and presents a case study whose goal is to increase the rank of a website with knowledge from previous tests in mind.
