  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using Roget's thesaurus to determine the similarity of texts

Ellman, Jeremy January 2000 (has links)
No description available.
2

Lexical Chains and Sliding Locality Windows in Content-based Text Similarity Detection

Nahnsen, Thade, Uzuner, Ozlem, Katz, Boris 19 May 2005 (has links)
We present a system to determine content similarity of documents. More specifically, our goal is to identify book chapters that are translations of the same original chapter; this task requires identification of not only the different topics in the documents but also the particular flow of these topics. We experiment with different representations employing n-grams of lexical chains and test these representations on a corpus of approximately 1000 chapters gathered from books with multiple parallel translations. Our representations include the cosine similarity of attribute vectors of n-grams of lexical chains, the cosine similarity of tf*idf-weighted keywords, and the cosine similarity of unweighted lexical chains (unigrams of lexical chains) as well as multiplicative combinations of the similarity measures produced by these approaches. Our results identify fourgrams of unordered lexical chains as a particularly useful representation for text similarity evaluation.
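As a rough illustration of the representation highlighted above, the sketch below (not the authors' code) computes the cosine similarity of two chapters represented as sliding-window fourgrams of lexical-chain labels, with each fourgram treated as an unordered bag. The chain labels are assumed to be given, and the two label sequences are hypothetical stand-ins for parallel translations of the same chapter.

```python
from collections import Counter
from math import sqrt

def fourgrams(chain_labels):
    """Sliding-window fourgrams over a sequence of lexical-chain labels;
    each window is treated as an unordered bag (frozenset) of labels."""
    return Counter(frozenset(chain_labels[i:i + 4]) for i in range(len(chain_labels) - 3))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical chain-label sequences for two translations of the same chapter.
doc1 = ["money", "bank", "trade", "ship", "river", "water"]
doc2 = ["bank", "money", "ship", "trade", "water", "river"]
print(cosine(fourgrams(doc1), fourgrams(doc2)))  # ~0.67: two of three fourgrams coincide
```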
3

An Investigation of Word Sense Disambiguation for Improving Lexical Chaining

Enss, Matthew January 2006 (has links)
This thesis investigates how word sense disambiguation affects lexical chains and proposes an improved model for lexical chaining in which word sense disambiguation is performed before chaining. A lexical chain is a set of words from a document that are related in meaning. Lexical chains can be used to identify the dominant topics in a document, as well as where changes in topic occur, which makes them useful for applications such as topic segmentation and document summarization.

However, polysemous words are an inherent problem for algorithms that find lexical chains, as the intended meaning of a polysemous word must be determined before its semantic relations to other words can be established. For example, the word "bank" should only be placed in a chain with "money" if, in the context of the document, "bank" refers to a place that deals with money rather than a river bank. The process by which the intended senses of polysemous words are determined is word sense disambiguation. To date, lexical chaining algorithms have performed word sense disambiguation as part of the overall process of building lexical chains. Because the intended senses of polysemous words must be determined before words can be properly chained, we propose that word sense disambiguation should be performed before lexical chaining occurs. Furthermore, if word sense disambiguation is performed prior to lexical chaining, it can be done with any available disambiguation method, without regard to how the lexical chains will be built afterwards. Therefore, the most accurate available method for word sense disambiguation should be applied prior to the creation of lexical chains.

We perform an experiment to demonstrate the validity of the proposed model, comparing the lexical chains produced in two cases:
1. Lexical chaining is performed as normal on a corpus of documents that has not been disambiguated.
2. Lexical chaining is performed on the same corpus, but all the words have been correctly disambiguated beforehand.
We show that the lexical chains created in the second case are more accurate than those created in the first. This result demonstrates that accurate word sense disambiguation performed before the creation of lexical chains leads to better lexical chains, confirming that our model for lexical chaining improves upon previous approaches.
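The pipeline the thesis argues for, disambiguate first and chain afterwards, can be sketched roughly as below. This is not the thesis implementation: disambiguate and related are placeholders for whatever WSD method and sense-relatedness test are available (for instance, WordNet relations), and keying each chain by a single sense is a simplification for illustration.

```python
def disambiguate(word, context):
    """Placeholder WSD: return a sense identifier for `word` in `context`.
    Any available disambiguation method could be plugged in here."""
    return word + "#1"

def related(sense_a, sense_b):
    """Placeholder sense-relatedness test (a real system might consult WordNet)."""
    return sense_a == sense_b

def build_chains(words):
    """Step 1: disambiguate every word. Step 2: add each word to the first chain
    whose key sense is related to its sense, or start a new chain.
    Each chain is keyed by the sense of its first member (a simplification)."""
    senses = [disambiguate(w, words) for w in words]
    chains, chain_senses = [], []
    for word, sense in zip(words, senses):
        for i, key_sense in enumerate(chain_senses):
            if related(sense, key_sense):
                chains[i].append(word)
                break
        else:
            chains.append([word])
            chain_senses.append(sense)
    return chains

print(build_chains(["bank", "money", "bank", "river"]))
# -> [['bank', 'bank'], ['money'], ['river']] with the placeholder functions above
```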
4

Lexical chain extraction for the creation of a topical focused crawler

Κοκόσης, Παύλος 16 May 2007 (has links)
Topical focused crawlers are applications that aim to collect web pages on a specific topic from the Web, and building them remains an open research field. In this master's thesis we develop a topical focused crawler that uses lexical chains. Lexical chains are an important lexical and computational tool for representing the meaning of a text, and they have been used successfully for automatic text summarization and for classifying texts into thematic categories. We present the processes of scoring hyperlinks and web pages, as well as the computation of semantic similarity between texts using lexical chains. We combine these processes and integrate them into a topical focused crawler, whose experimental results are very promising.
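A rough sketch of how lexical-chain similarity can drive a focused crawler's frontier is given below; it is not the thesis implementation. The topic profile and each fetched page are reduced to bags of chain labels, pages are scored by cosine similarity against the profile, and outgoing links inherit the score of the page that contains them. The fetch, extract_chains, and extract_links callables are assumed to be supplied by the caller, and lexical-chain extraction itself is outside this sketch.

```python
import heapq
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse count vectors of chain labels."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def crawl(seed_urls, topic_chains, fetch, extract_chains, extract_links, budget=100):
    """Best-first crawl: each fetched page is scored against the topic profile,
    and its outgoing links inherit that score as their frontier priority."""
    frontier = [(-1.0, url) for url in seed_urls]  # max-heap via negated scores
    heapq.heapify(frontier)
    seen = set(seed_urls)
    collected = []
    while frontier and len(collected) < budget:
        _, url = heapq.heappop(frontier)
        page = fetch(url)
        score = cosine(Counter(extract_chains(page)), topic_chains)
        collected.append((url, score))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return collected
```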
