151

On-demand Development of Statistical Machine Translation Systems

Gong, Li, 25 November 2014
Statistical Machine Translation (SMT) produces results that make it a preferred choice in most machine-assisted translation scenarios. However, the development of such high-performance systems involves the costly processing of very large-scale data. New data are constantly made available, yet systems built in the standard way are static, so that incorporating new data into an existing SMT system requires developers to re-train it from scratch. In addition, the adaptation process of SMT systems is typically based on some available held-out development set and is performed once and for all. In this thesis, we propose an on-demand framework that tackles these three problems jointly: it makes it possible to develop SMT systems on a per-need basis, with incremental updates, and to adapt existing systems to each individual input text. The first main contribution of this thesis is a new on-demand word alignment method that aligns training sentence pairs in isolation. This property allows SMT systems to compute information on a per-need basis and to seamlessly incorporate newly available data into an existing system without re-training it as a whole. The second main contribution is the integration of contextual sampling strategies that select translation examples from large-scale corpora on the basis of their similarity to the input text, so as to build adapted phrase tables.
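To make the second contribution concrete, here is a minimal sketch of contextual sampling: training pairs are ranked by a similarity score against the text to translate, and only the top-ranked pairs feed the adapted phrase table. The thesis abstract does not specify the similarity measure; the n-gram overlap below, the function names, and the cutoff k are illustrative assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def similarity(sentence, input_text, n=2):
    """Fraction of the sentence's n-grams that also occur in the input text."""
    sent = Counter(ngrams(sentence.split(), n))
    text = Counter(ngrams(input_text.split(), n))
    shared = sum((sent & text).values())  # counter intersection = shared n-grams
    return shared / max(1, sum(sent.values()))

def contextual_sample(bitext, input_text, k=1000):
    """Keep the k training pairs whose source side is most similar to the
    input text; only these feed the adapted phrase table."""
    return sorted(bitext, key=lambda pair: similarity(pair[0], input_text),
                  reverse=True)[:k]
```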
152

Development of a Portuguese-to-English machine translation system: tenses

Silva, Lucia Helena Rozario da, 03 August 2010
This dissertation presents the development of a Portuguese-to-English machine translation system. Our main objective is to create structural transfer rules between this language pair and to evaluate the performance of the system with the METEOR evaluation metric, using a test corpus created specifically for this research. Taking the importance of correctly translating the verb tenses of a sentence as a starting point, this work prioritized rules handling the transfer of verb tenses from Brazilian Portuguese to American English. Because Portuguese verbs are distributed across three conjugations, we created one corpus for each conjugation, in order to verify that the structural transfer rules apply to all three conjugation classes. After creating these corpora, we mapped the Portuguese verb tenses of the indicative, subjunctive and imperative moods to their English counterparts, and then constructed structural transfer rules for the mapped tenses. Finally, we evaluated the corpora for the three conjugation classes with the METEOR metric. The evaluation after inserting the rules showed a regression compared with the system at the initial stage of the research. Our analyses indicate that the METEOR metric was not sensitive to the modifications made to the system, even though the rules follow the traditional grammar of Portuguese and were applied to all three conjugation classes. We present in detail the set of syntactic transfer rules and the corpora used in this study, which we believe are general enough to be useful in any machine translation system between Brazilian Portuguese and American English. A further contribution of this work is a discussion of the scores produced by METEOR and the suggestion that its parameters be adjusted to make the metric more sensitive to sentence variations such as those introduced by our rules.
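For reference, METEOR scores like those discussed above can be computed with NLTK's implementation, which also exposes the alpha, beta and gamma parameters whose adjustment the dissertation argues for. A minimal sketch, assuming NLTK with the WordNet data installed; the sentence pair is invented:

```python
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR's synonym matching uses WordNet

reference = "the boys walked to school yesterday".split()       # invented example
hypothesis = "the boys were walking to school yesterday".split()

# NLTK (>= 3.6) expects pre-tokenized input. alpha, beta and gamma are the
# metric's tunable parameters (defaults shown): the precision/recall balance
# and the fragmentation penalty.
score = meteor_score([reference], hypothesis, alpha=0.9, beta=3.0, gamma=0.5)
print(f"METEOR: {score:.3f}")
```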
153

Shared memories of bilingual sub-sentential alignments

Segura, Johan, 16 November 2012
This research belongs to the field of Natural Language Processing (NLP) and focuses more specifically on bilingual sub-sentential alignment, a task classically tied to statistical machine translation. The work presented here differs in proposing an evolving, example-based mechanism bootstrapped by non-expert annotators through a dedicated interface. The approach is mainly motivated by the search for an expressivity comparable to that observed in manual alignments. An important part of this work consists in defining a formal framework underpinning an original architecture based on aligned examples. Several alignment memories were built using information from automatic syntactic parsers, while keeping the technological prerequisites reasonably low. Two new alignment methods are compared against known references using standard agreement measures, and three transformational distances are introduced.
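As an aside on the "standard agreement measures" mentioned above, the usual yardstick for comparing an alignment against a reference is the alignment error rate (AER) of Och and Ney, sketched below; treating it as the measure actually used in this thesis is an assumption.

```python
def alignment_error_rate(predicted, sure, possible):
    """AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|), with links as (i, j) pairs.
    S holds sure reference links, P ⊇ S the possible ones, A the prediction."""
    a, s = set(predicted), set(sure)
    p = set(possible) | s  # sure links are also possible by definition
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# e.g. alignment_error_rate({(0, 0), (1, 2)}, {(0, 0)}, {(0, 0), (1, 1)}) -> 0.333...
```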
154

Multimodal Machine Translation

Caglayan, Ozan, 27 August 2019
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived interest in interlingual machine translation, due to the way it naturally fits the task into an encoder-decoder framework that produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality, with the objective of augmenting the latent representations with other modalities such as vision or speech. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better, visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing word imputation, or gender marking when translating from a language with gender-neutral nouns to one with a grammatical gender system, as is the case with English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take both sentence and convolutional visual representations into account, and (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Automatic and human evaluation conducted on multiple language pairs demonstrated that the proposed approaches are beneficial. Finally, I show that by systematically removing certain linguistic information from the source sentences, the true strength of both methods emerges: they successfully impute missing nouns and colors, and can even translate when parts of the source sentences are removed entirely.
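A minimal numpy sketch of approach (i), the multimodal attention mechanism: a decoder state attends separately over source-sentence states and convolutional image-region features, and the two context vectors are fused. Dot-product scoring and concatenation fusion are simplifying assumptions, not the exact architecture of the thesis.

```python
import numpy as np

def attend(query, keys):
    """Dot-product attention: relevance scores, softmax, weighted sum."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys

def multimodal_attention(dec_state, src_states, visual_feats):
    """One decoding step attends over source hidden states (T x d) and
    convolutional image-region features (R x d), then fuses both contexts."""
    text_ctx = attend(dec_state, src_states)    # summary of the source sentence
    img_ctx = attend(dec_state, visual_feats)   # summary of relevant image regions
    return np.concatenate([text_ctx, img_ctx])  # fused context fed to the decoder

# e.g. multimodal_attention(np.random.rand(4), np.random.rand(5, 4), np.random.rand(3, 4))
```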
155

Otto Kade and his Contribution to Translation Studies

Benešová, Rút, January 2019
This theoretical and biographical thesis deals with the work of Otto Kade, a major German Translation Studies scholar. It is based on an analysis of his monographs and articles and presents his most important ideas and contributions to the development of Translation Studies. The thesis describes the circumstances under which Kade's theory was created and depicts his efforts to defend Translation Studies as an independent field of science: his endeavour to establish the subject of the discipline, to put it on a more scientific footing, to develop a consistent and innovative terminology and methodology, to assess the social status of translators and interpreters, and to systematise their education and didactics. Last but not least, the thesis demonstrates how wide in scope his reflections were and outlines the reception of Kade's concepts.
Keywords: Otto Kade, Leipzig School, translation theory, equivalence types, machine translation
156

Statistical pattern recognition approaches for retrieval-based machine translation systems

Mansjur, Dwi Sianto, 01 November 2011
This dissertation addresses the problem of Machine Translation (MT), defined as the automated translation of a document written in one language (the source language) into another (the target language). The MT task requires various types of knowledge of both the source and target language, e.g., linguistic rules and linguistic exceptions. Traditional MT systems rely on an extensive parsing strategy to decode the linguistic rules and use a knowledge base to encode the linguistic exceptions. However, constructing the knowledge base becomes an issue as the translation system grows. To overcome this difficulty, real translation examples are used instead of a manually crafted knowledge base, a design strategy known as the Example-Based Machine Translation (EBMT) principle. Traditional EBMT systems utilize a database of word or phrase translation pairs; the main challenge of this approach is the difficulty of combining word or phrase translation units into a meaningful and fluent target text. This study proposes a novel Retrieval-Based Machine Translation (RBMT) system that uses a sentence-level translation unit. An advantage of the sentence-level unit is that the boundary of a sentence is explicitly defined and its semantics, or meaning, is precise in both the source and target language. The main challenge of using a sentential translation unit is its limited coverage, i.e., the difficulty of finding an exact match between a user query and the sentences in the source database. Using an electronic dictionary and a topic-modeling procedure, we develop a method to obtain clusters of sensible variations for each example in the source database. The coverage of our MT system improves because an input query is matched against a cluster of sensible variations of each translation example instead of against the original source example alone. In addition, pattern recognition techniques are used to improve the matching procedure, namely the design of optimal pattern classifiers and the incorporation of subjective judgments. A high-performance statistical pattern classifier is used to identify the target sentences for an input query sentence. The proposed classifier differs from conventional classifiers in the way it addresses generalization: a conventional classifier relies on the parsimony principle and risks choosing an oversimplified statistical model, whereas the proposed classifier addresses generalization directly in terms of the training (empirical) data and is therefore less likely to settle on an over-simplified model. We further improve the matching procedure by incorporating subjective judgments: we formulate a novel cost function that combines subjective judgments with the degree of matching between translation examples and an input query, and we provide an optimization strategy so that the statistical model can be tuned according to those judgments.
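A toy sketch of the retrieval step described above: each translation example carries its cluster of sensible variations, and a query is matched against every variation rather than only the original source sentence. Cosine similarity over bags of words stands in for the statistical pattern classifier the dissertation actually develops; all names and data here are illustrative.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(cnt * b[w] for w, cnt in a.items() if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def translate(query, database):
    """database: list of (variations, target); `variations` is the cluster of
    sensible source-side variants generated for one translation example."""
    q = Counter(query.lower().split())
    best_score, best_target = 0.0, None
    for variations, target in database:
        score = max(cosine(q, Counter(v.lower().split())) for v in variations)
        if score > best_score:  # best variation decides the example's score
            best_score, best_target = score, target
    return best_target

# e.g. translate("how are you today",
#                [(["how are you", "how are you doing"], "comment allez-vous"),
#                 (["good morning"], "bonjour")])
```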
157

Discriminative Alignment Models For Statistical Machine Translation

Tomeh, Nadi, 27 June 2012
Bitext alignment is the task of aligning a text in a source language with its translation in the target language. Aligning amounts to finding the translational correspondences between textual units at different levels of granularity. Many practical natural language processing applications rely on bitext alignments to access the rich linguistic knowledge present in a bitext. While the predominant application of bitexts is statistical machine translation, they are also used in multilingual (and monolingual) lexicography, word sense disambiguation, terminology extraction, computer-aided language learning and translation studies, to name a few. Bitext alignment is an arduous task because meaning is not expressed in the same way across languages: it varies with the linguistic properties and cultural backgrounds of the languages involved, and also depends on the translation strategy that was used to produce the bitext. Current practice in bitext alignment models the alignment as a hidden variable in the translation process. To reduce the complexity of the task, such approaches assume that a word in the source sentence is aligned to at most one word in the target sentence. However, this over-simplistic assumption yields asymmetric, one-to-many alignments, whereas true alignments are typically symmetric and many-to-many. To achieve symmetry, two one-to-many alignments in opposite translation directions are built and combined using a heuristic; and in order to use these word alignments in phrase-based translation systems, which operate on phrases instead of words, another heuristic is used to extract phrase pairs consistent with the word alignment. In this dissertation we address both the word alignment problem and the phrase-pair extraction problem, improving the state of the art in several ways with discriminative learning techniques. We present a maximum entropy (MaxEnt) framework for word alignment, in which links are predicted independently from one another by a MaxEnt classifier. The interaction between alignment decisions is approximated using stacking techniques, which allows us to account for part of the structural dependencies without increasing complexity. This formulation can be seen as an alignment combination method, in which the union of several input alignments is used to guide the output alignment; the input alignments are also used to compute a rich set of feature functions. Our MaxEnt aligner obtains state-of-the-art results both in alignment quality, as measured by the alignment error rate, and in translation quality, as measured by BLEU on large-scale Arabic-English NIST'09 systems. We also present a translation-quality-informed procedure for both the extraction and the evaluation of phrase pairs. We reformulate the problem in a supervised framework in which we decide, for each phrase pair, whether or not to keep it in the translation model. This offers a principled way to combine several features and makes the procedure more robust to alignment difficulties. We use a simple and effective method, based on oracle decoding, to annotate phrase pairs that are useful for translation; with machine learning techniques based on positive examples only, these annotations can be used to learn phrase alignment decisions. This approach yields BLEU improvements for recall-oriented translation models, which are well suited to small training corpora.
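A minimal sketch of the MaxEnt link classifier described above, using logistic regression (binary maximum entropy) from scikit-learn: each candidate link (i, j) is classified independently, and predictions from input aligners enter as stacked features. The feature set and the toy training pair are invented for illustration; the thesis uses a much richer set of feature functions.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def link_features(src, tgt, i, j, union_links):
    """Features for one candidate link (i, j). `union_links` plays the role of
    the stacked input alignments; this feature set is an assumption."""
    return {
        "src_word=" + src[i]: 1,
        "tgt_word=" + tgt[j]: 1,
        "rel_dist": abs(i / len(src) - j / len(tgt)),  # relative position gap
        "in_union": int((i, j) in union_links),        # vote from input aligners
    }

# Toy training data: every (i, j) pair of one sentence pair, labelled link/no-link.
src, tgt, union = ["the", "cat"], ["le", "chat"], {(0, 0), (1, 1)}
examples = [(link_features(src, tgt, i, j, union), int((i, j) in union))
            for i in range(len(src)) for j in range(len(tgt))]

vec = DictVectorizer()
X = vec.fit_transform(feats for feats, _ in examples)
y = [label for _, label in examples]
maxent = LogisticRegression(max_iter=1000).fit(X, y)  # one decision per link
```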
158

Collocation Segmentation for Text Chunking

Daudaravičius, Vidas, 04 February 2013
Segmentation is a widely used paradigm in text processing, performed with rule-based, statistical and hybrid methods. This dissertation introduces a new type of segmentation - collocation segmentation - together with a new method to perform it, and applies them to three different text processing tasks. In lexicography, collocation segmentation makes it possible to use large corpora to evaluate the usage and importance of terminology over time. Text categorization results can also be improved: the study shows that collocation segmentation, without any other language resources, achieves better results than the widely used n-gram techniques combined with part-of-speech (POS) processing tools. Furthermore, preprocessing data with collocation segmentation and integrating the resulting segments into a statistical machine translation system improves the translation results, and different word combinability measures influence the final collocation segmentation, and thus the translation results, in different ways. The new collocation segmentation method is simple, efficient and applicable to language processing for diverse applications.
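A compact sketch of collocation segmentation as described: compute a combinability score for every adjacent word pair over the corpus, then cut the token stream wherever the score between neighbours falls below a threshold, so that the remaining runs are the collocation segments. Dice is just one of the combinability measures the dissertation compares, and the threshold value is an arbitrary assumption.

```python
from collections import Counter

def dice_scores(tokens):
    """Dice combinability for each adjacent word pair in the corpus."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    return {(a, b): 2 * c / (uni[a] + uni[b]) for (a, b), c in bi.items()}

def collocation_segment(tokens, scores, threshold=0.8):
    """Cut the token stream wherever adjacent-word combinability drops below
    the threshold; each remaining run is a collocation segment."""
    segments, current = [], [tokens[0]]
    for prev, word in zip(tokens, tokens[1:]):
        if scores.get((prev, word), 0.0) >= threshold:
            current.append(word)  # strong combinability: stay in the segment
        else:
            segments.append(current)
            current = [word]
    segments.append(current)
    return segments

# e.g. toks = ("statistical machine translation improves with "
#              "statistical machine translation data").split()
#      collocation_segment(toks, dice_scores(toks))
#      -> [['statistical', 'machine', 'translation'], ['improves', 'with'], ...]
```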
160

Attachment of English prepositional phrases and suggestions of English prepositions

蔡家琦 (Tsai, Chia Chi), date unknown
This thesis focuses on two problems: the attachment of prepositional phrases (PPs) and the suggestion of prepositions. Determining the correct PP attachment is not easy for computers, and using the correct preposition is not easy for learners of English as a second language. I cast PP attachment and preposition suggestion as instances of one abstract decision model and apply the same computational procedure to solve both. The common model features four headwords: the verb, the first noun, the preposition, and the second noun in the prepositional phrase. My methods use the semantic features of these headwords in WordNet, selected hierarchically, to train classification models, and apply the learned models to the attachment and suggestion problems. This exploration of PP attachment is distinctive in that only PPs that are almost equally likely to attach to the verb and to the first noun were used in the study, and the proposed models consider only the four headwords yet achieve satisfactory performance. In experiments on PP attachment, my methods outperformed a Maximum Entropy classifier that also considered four headwords, and performed comparably to the Stanford parsers, even though the parsers had access to the complete sentences to judge the attachments. In experiments on preposition suggestion, my methods found the correct preposition 53.14% of the time, which is not as good as the best-performing system today. This study reconfirms that semantic information is instrumental for both PP attachment and preposition suggestion: high-level semantic information yielded good performance, and hierarchically selected semantic synsets further improved the observed results. I believe these results are valuable for future studies of PP attachment and preposition suggestion, which are key components of machine translation and text proofreading.
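A sketch of the shared feature model described above: the four headwords are mapped to WordNet features by walking up the hypernym hierarchy, approximating the hierarchical selection the abstract mentions. The fixed depth, the first-synset choice, and the feature naming are all simplifying assumptions.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def headword_features(verb, noun1, prep, noun2, depth=3):
    """Features for one (verb, noun1, prep, noun2) quadruple: the preposition
    itself plus hypernyms of each headword up to a fixed depth, a rough
    stand-in for the hierarchical WordNet selection described above."""
    feats = {"prep=" + prep: 1}
    for role, word in (("v", verb), ("n1", noun1), ("n2", noun2)):
        synsets = wn.synsets(word)
        if not synsets:
            continue
        node = synsets[0]  # first-sense heuristic, a simplifying assumption
        for _ in range(depth):
            feats["%s_syn=%s" % (role, node.name())] = 1
            hypernyms = node.hypernyms()
            if not hypernyms:
                break
            node = hypernyms[0]  # climb one level up the semantic hierarchy
    return feats

# e.g. headword_features("eat", "pizza", "with", "fork")
# The resulting dictionaries feed a standard classifier that predicts the
# attachment site (verb vs. first noun) or, for suggestion, the preposition.
```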
