31

Μεθοδολογία αυτόματου σημασιολογικού σχολιασμού στο περιεχόμενο ιστοσελίδων / A methodology for automatic semantic annotation of web page content

Σπύρος, Γεώργιος 14 December 2009 (has links)
Nowadays, use of the World Wide Web has evolved into a social phenomenon. Its spread is continuous and growing exponentially. In the years since its first appearance, users have gained a certain level of experience and have formed a set of assumptions based on that experience. They have come to understand that the web pages they interact with in their everyday activities are creations of other users. It has also become clear that every user can create his or her own web page and include in it references to other users' pages. These references rarely appear as bare hyperlinks; most of the time they are accompanied by text that provides useful information about the referenced page's content. In this diploma thesis we describe a methodology for the automatic semantic annotation of a web page's contents. The tools and techniques described are based on two main hypotheses. First, people who create and maintain web pages describe other web pages within them. Second, people connect their web pages to each page they describe via an anchor link that is clearly marked with a tag in the page's HTML code. The automatic semantic annotation we attempt for a web page amounts to finding a tag able to describe the page's contents. Finding this tag follows a methodology consisting of a fixed sequence of steps, each implemented with various tools and techniques, where the output of one step feeds the input of the next. The basic idea is to collect as many anchor texts as possible, along with a window of words around them, for each target web page. This collection results from processing many web pages that contain hyperlinks to the page we want to annotate. The semantic tag for the page is then derived by applying natural language processing techniques to the collection of texts that refer to it, yielding the final semantic annotation of the page's contents.
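
A rough sketch of this pipeline in Python is shown below. It is only an illustration of the idea, not code from the thesis: the function names are invented, BeautifulSoup is an assumed dependency, and a simple most-frequent-content-word heuristic stands in for the full linguistic processing step.

# Minimal sketch: aggregate anchor texts (plus neighbouring words) from
# referring pages and derive a tag for the target page. Illustrative only.
from collections import Counter
from bs4 import BeautifulSoup

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "on", "is"}

def anchor_contexts(html, target_url, window=5):
    """Yield anchor words plus up to `window` surrounding words for every
    link in `html` that points at `target_url`."""
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=target_url):
        words = a.get_text().split()
        # Crude neighbourhood: words of the element enclosing the anchor.
        context = a.parent.get_text().split()
        words += context[:window] + context[-window:]
        yield [w.lower().strip(".,;:!?") for w in words]

def semantic_tag(referring_pages, target_url):
    """Pick the most frequent content word over all collected contexts."""
    counts = Counter()
    for html in referring_pages:
        for words in anchor_contexts(html, target_url):
            counts.update(w for w in words if w and w not in STOPWORDS)
    return counts.most_common(1)[0][0] if counts else None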
32

The Rumble in the Disambiguation Jungle: Towards the comparison of a traditional word sense disambiguation system with a novel paraphrasing system

Smith, Kelly January 2011 (has links)
Word sense disambiguation (WSD) is the process of computationally identifying and labeling polysemous words in context with their correct meaning, known as a sense. WSD is riddled with obstacles that must be overcome for it to reach its full potential. One of these problems is the representation of word meaning. Traditional WSD algorithms assume that a word in a given context has only one meaning and can therefore return only one discrete sense. A novel alternative view is that a given word can have multiple senses. Studies on graded word sense assignment (Erk et al., 2009) as well as work in cognitive science (Hampton, 2007; Murphy, 2002) support this theory. It has therefore been adopted in a novel paraphrasing system which performs word sense disambiguation by returning a probability distribution over potential paraphrases (in this case synonyms) of a given word. However, it is unknown how well this type of algorithm fares against the traditional one. The current study thus examines if and how a comparison of the two is possible. A method of comparison is evaluated and subsequently rejected; the reasons for this, as well as suggestions for a fair and accurate comparison, are presented.
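
The difference between the two output styles is easy to make concrete. The following sketch is purely illustrative (the toy lexicon and both functions are invented here, not the systems compared in the thesis): the paraphrasing system returns a distribution, which a traditional system would collapse to a single label.

# Toy contrast between discrete-sense and distribution-style WSD output.
from typing import Dict, List

# Hypothetical context-conditioned paraphrase scores.
PARAPHRASE_SCORES = {
    ("bright", "student"): {"intelligent": 0.7, "clever": 0.2, "shining": 0.1},
    ("bright", "light"): {"shining": 0.8, "intense": 0.15, "intelligent": 0.05},
}

def paraphrasing_wsd(word: str, context: List[str]) -> Dict[str, float]:
    """Novel style: a probability distribution over paraphrases of `word`."""
    for cue in context:
        if (word, cue) in PARAPHRASE_SCORES:
            return PARAPHRASE_SCORES[(word, cue)]
    return {}

def traditional_wsd(word: str, context: List[str]) -> str:
    """Traditional style: a single discrete sense (here, the argmax)."""
    dist = paraphrasing_wsd(word, context)
    return max(dist, key=dist.get) if dist else word

print(paraphrasing_wsd("bright", ["a", "bright", "student"]))
print(traditional_wsd("bright", ["a", "bright", "student"]))  # intelligent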
33

A Minimally Supervised Word Sense Disambiguation Algorithm Using Syntactic Dependencies and Semantic Generalizations

Faruque, Md. Ehsanul 12 1900 (has links)
Natural language is inherently ambiguous. For example, the word "bank" can mean a financial institution or a river shore. Finding the correct meaning of a word in a particular context is a task known as word sense disambiguation (WSD), which is essential for many natural language processing applications such as machine translation, information retrieval, and others. While most current WSD methods try to disambiguate a small number of words for which enough annotated examples are available, the method proposed in this thesis attempts to address all words in unrestricted text. The method is based on constraints imposed by syntactic dependencies and concept generalizations drawn from an external dictionary. The method was tested on standard benchmarks as used during the SENSEVAL-2 and SENSEVAL-3 WSD international evaluation exercises, and was found to be competitive.
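
The "bank" example can be reproduced with a standard dictionary-based baseline. The sketch below uses NLTK's simplified Lesk implementation, not the dependency-based method of the thesis, and assumes the wordnet and punkt data packages have been downloaded; the printed synsets are indicative only, since Lesk is a heuristic.

# Disambiguating "bank" with NLTK's simplified Lesk algorithm.
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

financial = word_tokenize("I deposited my paycheck at the bank yesterday")
river = word_tokenize("We sat on the grassy bank of the river and fished")

print(lesk(financial, "bank", "n"))  # e.g. Synset('depository_financial_institution.n.01')
print(lesk(river, "bank", "n"))      # e.g. Synset('bank.n.01'), the river-shore sense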
34

Unsupervised Knowledge-based Word Sense Disambiguation: Exploration & Evaluation of Semantic Subgraphs

Manion, Steve Lawrence January 2014 (has links)
Hypothetically, if you were told "Apple uses the apple as its logo", you would immediately detect two different senses of the word apple: the company and the fruit, respectively. Making this distinction is the formidable challenge of Word Sense Disambiguation (WSD), a subtask of many Natural Language Processing (NLP) applications. This thesis is a multi-branched investigation into WSD that explores and evaluates unsupervised knowledge-based methods exploiting semantic subgraphs. The research covered by this thesis breaks down into: 1. mining data from the encyclopedic resource Wikipedia to visually demonstrate the existence of context embedded in semantic subgraphs; 2. achieving disambiguation in order to merge concepts that originate from heterogeneous semantic graphs; 3. participation in international evaluations of WSD across a range of languages; and 4. treating WSD as a classification task that can be optimised through the iterative construction of semantic subgraphs. The contributions of each chapter vary, but can be summarised by what has been produced, learnt, and raised throughout the thesis. Furthermore, an API and several resources have been developed as a by-product of this research, all of which can be accessed via the author's home page at http://www.stevemanion.com. This should enable researchers to replicate the results achieved in this thesis and build on them if they wish.
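
The core intuition, that context is embedded in the structure of a semantic subgraph, can be shown with a toy example. The graph below is invented for illustration (the thesis builds its subgraphs from real knowledge resources such as Wikipedia); candidate senses are ranked by their connectivity to context concepts.

# Toy subgraph disambiguation of "apple" with networkx.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("apple_(company)", "logo"), ("apple_(company)", "technology"),
    ("apple_(company)", "iphone"),
    ("apple_(fruit)", "logo"), ("apple_(fruit)", "tree"),
])

context = {"logo", "technology", "iphone"}
candidates = ["apple_(company)", "apple_(fruit)"]

def connectivity(sense):
    # Number of direct edges from the candidate sense into the context.
    return sum(1 for c in context if G.has_edge(sense, c))

print(max(candidates, key=connectivity))  # apple_(company)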
35

Uma abordagem híbrida relacional para a desambiguação lexical de sentido na tradução automática / A hybrid relational approach for word sense disambiguation in machine translation

Specia, Lucia 28 September 2007 (has links)
Crosslingual communication has become an increasingly imperative task in the current scenario of large-scale dissemination of information in many languages. In this context, machine translation systems, which facilitate such communication by providing automatic translations, are of great importance. Although research in Machine Translation dates back to the 1950s, the area still faces many problems. One of the main problems is lexical ambiguity, that is, the need for lexical choice when translating a source-language word that has several translation options in the target language. This problem is even more complex when only sense variations are found among the translation options, a case termed "sense ambiguity". Several approaches have been proposed for word sense disambiguation, but they are in general monolingual (for English) and application-independent. Moreover, they have limitations regarding the types of knowledge sources that can be exploited. In particular, there is no significant research aimed at word sense disambiguation involving Portuguese. The goal of this PhD work is the proposal and development of a novel approach for word sense disambiguation that is specifically designed for machine translation, follows a hybrid methodology (knowledge- and corpus-based), and employs a relational formalism to represent various kinds of knowledge sources and disambiguation examples, using Inductive Logic Programming. Several experiments have shown that the proposed approach outperforms alternative approaches in multilingual disambiguation and achieves results higher than or comparable to the state of the art in monolingual disambiguation. Additionally, the approach has been shown to effectively assist lexical choice in a statistical machine translation system.
36

Klasifikátor pro sémantické vzory užívání anglických sloves / Classifier for semantic patterns of English verbs

Kríž, Vincent January 2012 (has links)
The goal of this diploma thesis is to design, implement and evaluate classifiers for the automatic classification of semantic patterns of English verbs according to a pattern lexicon that draws on Corpus Pattern Analysis. We use a pilot collection of 30 sample English verbs as training and test data sets and employ standard methods of machine learning. In our experiments we use decision trees, k-nearest neighbours (kNN), support vector machines (SVM) and the AdaBoost algorithm. Among other things, we concentrate on feature design and selection, experimenting with both morpho-syntactic and semantic features. Our results show that morpho-syntactic features are the most important for statistically-driven semantic disambiguation; nevertheless, for some verbs semantic features play an important role.
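
A compressed sketch of this kind of experiment in scikit-learn follows; the feature names and training instances are invented, and SVC stands in for any of the classifiers listed above.

# Hedged sketch: classify verb occurrences into semantic patterns from
# morpho-syntactic features. Features and data are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# One dict of morpho-syntactic features per verb occurrence.
X = [
    {"subj_pos": "NN", "obj_pos": "NN", "has_prep": False},
    {"subj_pos": "NN", "obj_pos": "NONE", "has_prep": True},
]
y = ["pattern_1", "pattern_2"]  # pattern ids from the lexicon

clf = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([{"subj_pos": "NN", "obj_pos": "NN", "has_prep": False}]))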
37

Syntactic and Semantic Analysis and Visualization of Unstructured English Texts

Karmakar, Saurav 14 December 2011 (has links)
People have complex thoughts, and they often express them with complex sentences in natural language. This complexity may facilitate efficient communication among an audience with a shared knowledge base; for a different or new audience, however, such compositions become cumbersome to understand and analyze. Analyzing such compositions with syntactic or semantic measures is a challenging task and a foundational step in natural language processing. In this dissertation I explore and propose a number of new techniques to analyze and visualize the syntactic and semantic patterns of unstructured English texts. The syntactic analysis is done through a proposed visualization technique that categorizes and compares English compositions based on different reading-complexity metrics. For the semantic analysis I use Latent Semantic Analysis (LSA) to uncover hidden patterns in complex compositions; I have applied this technique to comments from a social visualization web site to detect irrelevant ones (e.g., spam). The patterns of collaboration are also studied through statistical analysis. Word sense disambiguation is used to determine the correct sense of a word in a sentence or composition. Applying textual similarity measures, based on different word similarity measures and word sense disambiguation, to collaborative text snippets from a social collaborative environment points toward untangling the complex hidden patterns of collaboration.
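
The LSA-based spam-detection step admits a short sketch. Everything below is illustrative (the comments, the number of latent dimensions, and the flagging threshold are all invented): comments are projected into a latent space, and any comment dissimilar to all others is flagged as potentially irrelevant.

# Sketch: flag comments that are semantically isolated under LSA.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "The chart shows unemployment rising sharply after 2008",
    "Interesting spike in the unemployment data around the recession",
    "BUY CHEAP WATCHES at www.example-spam.test",
    "The 2008 recession clearly drives the rising trend in this chart",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(comments)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

sim = cosine_similarity(lsa)
np.fill_diagonal(sim, 0.0)
for text, s in zip(comments, sim.mean(axis=1)):
    label = "SPAM?" if s < 0.1 else "ok"  # illustrative threshold
    print(f"{label:5s} {s:+.2f}  {text[:50]}")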
38

Context-aware semantic analysis of video metadata

Steinmetz, Nadine January 2013 (has links)
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. Compared to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents: by clarifying the meaning behind entities, search results are more precise, and the explicit semantics simultaneously enable the exploration of semantic relationships. However, unlike keyword search, a semantic, entity-focused search requires that documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents requires domain knowledge and is time-consuming; in response, automatic annotation services have emerged in recent years. These services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities from widely used knowledge bases such as Freebase or DBpedia. The metadata of video documents require special attention: semantic analysis approaches for continuous text cannot be applied, because the content-based information of video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes the characteristics of video metadata into account and derives a confidence value for each metadata item; this value represents the level of correctness and ambiguity of the item's textual information: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items of a context are then analyzed in order from high to low confidence, with previously analyzed items serving as reference points for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context, which is created dynamically for each metadata item based on its confidence value and other characteristics. The proposed analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata belonging to a context should be limited by content-based segmentation boundaries, so that contexts have coherent content. The evaluation results support both hypotheses and show increased recall and precision for annotated entities, especially for metadata originating from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for semantic video exploration.
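
The confidence-ordered control flow reduces to a few lines. The sketch below is a condensed paraphrase of the described process, with `disambiguate` standing in for the thesis's actual algorithms:

# Process metadata items from high to low confidence; each resolved
# entity joins the context used to disambiguate the remaining items.
def analyze_context(metadata_items, disambiguate):
    """metadata_items: dicts with at least 'text' and 'confidence' keys."""
    resolved = []  # reference points for subsequent disambiguation
    ordered = sorted(metadata_items, key=lambda m: m["confidence"], reverse=True)
    for item in ordered:
        entity = disambiguate(item["text"], context=resolved)
        if entity is not None:
            resolved.append(entity)
    return resolved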
39

Análise de sentimento e desambiguação no contexto da tv social / Sentiment analysis and disambiguation in the context of Social TV

Lima, Ana Carolina Espírito Santo 14 December 2012 (has links)
Social media have become a way of expressing collective interests: people are motivated by sharing information and by the feedback from friends and colleagues. Among the many social media tools available, the Twitter microblog has gained popularity as a platform for instantaneous communication, with millions of messages generated daily by over 100 million users on the most varied subjects. As a rapid communication platform, the microblog has spurred a phenomenon of "television narrators", where users comment on what they watch on TV while the programs are being broadcast. Social TV emerged from this integration of social media and television. The data generated about TV shows are rich material for analysis; broadcasters may use such information to improve their programs and increase interaction with their audience. Among the main challenges in social media data analysis are sentiment analysis (determining the polarity of a text, for instance positive or negative) and sense disambiguation (determining the correct context of polysemous words). This dissertation aims to use machine learning techniques to create a tool to support Social TV, contributing specifically to the automation of sentiment analysis and disambiguation of Twitter messages.
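
The sentiment-polarity component can be sketched with a standard bag-of-words classifier; the tiny training set below is invented, and the thesis may well use different features and models:

# Minimal sketch: tweet polarity with a Naive Bayes bag-of-words model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "loving this show tonight", "what a great episode",
    "this program is terrible", "so boring, changing channels",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["great show, loving it"]))  # ['positive']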